research-article

Investigating Perceived Emotional Correlates of Rhythmic Density in Algorithmic Music Composition

Published: 26 June 2015

Abstract

Affective algorithmic composition is a growing field that combines perceptually motivated affective computing strategies with novel music generation. This article presents work toward the development of one such application. The long-term goal is to develop a responsive and adaptive system for inducing affect that is both controlled and validated by biophysical measures. Literature documenting perceptual responses to music identifies a variety of musical features and possible affective correlations, but perceptual evaluations of these musical features for the purposes of inclusion in a music generation system are not readily available. A discrete feature, rhythmic density (a function of note duration in each musical bar, regardless of tempo), was selected because it has been shown to correlate well with affective responses in the existing literature. A prototype system was then designed to produce controlled degrees of variation in rhythmic density via a transformative algorithm. A two-stage perceptual evaluation of a stimulus set created by this prototype was then undertaken. First, listener responses from a pairwise scaling experiment were analyzed via Multidimensional Scaling Analysis (MDS). The statistical best-fit solution was rotated such that stimuli with the largest range of variation were placed across the horizontal plane in two dimensions. In this orientation, stimuli with deliberate variation in rhythmic density appeared farther from the source material used to generate them than from stimuli generated by random permutation. Second, the same stimulus set was evaluated, in the order suggested by the rotated two-dimensional solution, in a verbal elicitation experiment. A Verbal Protocol Analysis (VPA) found that listener perception of the stimulus set varied in at least two commonly understood emotional descriptors, which might be considered affective correlates of rhythmic density.
These results thus corroborate previous studies in which musical parameters were monitored for changes in emotional expression, suggest that similarly parameterized control of perceived emotional content can be achieved in an affective algorithmic composition system, and provide a methodology for evaluating further musical features for inclusion in such a system. Some suggestions regarding the test procedure and analysis techniques are also documented here.
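The abstract defines rhythmic density as a function of note duration in each bar, regardless of tempo. A minimal sketch of one way such a measure could be computed (onsets per beat within a bar); the function name and note representation are illustrative assumptions, not taken from the article:

```python
def rhythmic_density(bars):
    """Onsets per beat for each bar.

    bars: a list of bars, each a list of note durations in beats.
    Dividing the onset count by the bar's total length in beats keeps
    the measure independent of tempo, since beats (not seconds) are used.
    """
    return [len(bar) / sum(bar) for bar in bars]


# A bar of four quarter notes vs. a bar of eight eighth notes:
print(rhythmic_density([[1, 1, 1, 1], [0.5] * 8]))  # → [1.0, 2.0]
```

Under this sketch, a transformative algorithm could raise or lower density in controlled steps by subdividing or merging durations while preserving each bar's total length.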
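The pairwise-scaling responses were analyzed with MDS, which embeds stimuli as points whose distances approximate the judged dissimilarities. As a hedged illustration of the general technique only (the article used a nonmetric best-fit solution, and this metric variant is not the authors' analysis pipeline), classical Torgerson MDS recovers a two-dimensional configuration from a dissimilarity matrix:

```python
import numpy as np


def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: a metric sketch of the MDS idea.

    D: symmetric matrix of pairwise dissimilarities.
    Returns an n-by-k coordinate matrix obtained by double-centering
    the squared dissimilarities and taking the top-k eigenvectors.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Any rigid rotation of the recovered configuration leaves the fitted distances unchanged, which is why a best-fit MDS solution can be rotated before interpretation, as done in the study.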



Published in

ACM Transactions on Applied Perception, Volume 12, Issue 3 (July 2015), 92 pages
ISSN: 1544-3558
EISSN: 1544-3965
DOI: 10.1145/2798084

          Copyright © 2015 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 26 June 2015
• Revised: 1 March 2015
• Accepted: 1 March 2015
• Received: 1 October 2014


          Qualifiers

          • research-article
          • Research
          • Refereed
