
Are you emotional or depressed? Learning about your emotional state from your music using machine learning

The Journal of Supercomputing

Abstract

Music plays an important role in our society, and because of its social and physiological effects it has applications well beyond entertainment and pleasure. Two active research topics in this area are music information retrieval and music emotion recognition, where data mining and machine learning techniques are combined with musical features and annotations to extract information such as genre, instrumentation and emotional content. In this paper, a machine learning music perception model is proposed to identify the emotional content of a given audio stream and to study the emotional effects of music. The model can also estimate the emotional state of a region (e.g., a city), which could be useful in marketing and in other facets of society such as cognitive development, education, therapy and security. The emotion recognition task is performed by mapping musical acoustic features to corresponding arousal and valence emotion indexes using a linear regression model. A radio-induced emotion dataset (RIED) is compiled from the songs broadcast on radio in four major US cities (Houston, New York, Los Angeles and Miami) between October 21, 2017, and November 21, 2017. The RIED is then used as input to the proposed perception model to observe regional music emotion propensity.
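
As a rough illustration of the feature-to-emotion mapping described above, the following minimal sketch fits a linear regression from per-song acoustic features to arousal and valence indexes. It is not the authors' exact pipeline: the feature matrix, target values and split are placeholders, and only the general technique (multi-output linear regression in scikit-learn) follows the abstract.

```python
# Minimal sketch (assumed setup, not the paper's actual pipeline): map per-song
# acoustic features to arousal and valence indexes with a linear regression,
# then predict the emotion indexes of unseen songs.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per song, columns = acoustic features
# (e.g., MFCC statistics, tempo, spectral descriptors). Replace with real features.
X = rng.normal(size=(500, 20))

# Placeholder annotations: arousal and valence in [-1, 1], one pair per song.
y = rng.uniform(-1, 1, size=(500, 2))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# LinearRegression accepts a two-column target directly, fitting one set of
# coefficients per output (arousal, valence).
model = LinearRegression()
model.fit(X_train, y_train)

arousal_valence_pred = model.predict(X_test)  # shape (n_songs, 2)
print("Predicted (arousal, valence) for first test song:", arousal_valence_pred[0])
```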



Author information


Corresponding author

Correspondence to Kim-Kwang Raymond Choo.


About this article


Cite this article

Panwar, S., Rad, P., Choo, KK.R. et al. Are you emotional or depressed? Learning about your emotional state from your music using machine learning. J Supercomput 75, 2986–3009 (2019). https://doi.org/10.1007/s11227-018-2499-y

