DOI: 10.1145/3411763.3451690 · CHI Conference Proceedings · Poster

Towards Inclusive Streaming: Building Multimodal Music Experiences for the Deaf and Hard of Hearing

Published: 08 May 2021

ABSTRACT

As online streaming becomes a primary method of music consumption, the various modalities that many people with hearing loss rely on to enjoy music need to be supported. While visual or tactile representations can be used to experience music at a live event or from a recording, DRM anti-piracy encryption restricts access to the audio data needed to create these multimodal experiences when music is streamed. We introduce BufferBeats, a toolkit for building multimodal music streaming experiences. To explore the toolkit's flexibility and exhibit its potential use cases, we build and reflect upon a collection of technical demonstrations that bring previous and new multimodal music experiences to streaming. Grounding our work in critical theories on design, making, and disability, as well as the experiences of a small group of community partners, we argue that support for multimodal music streaming experiences will not only be more inclusive of the deaf and hard of hearing, but will also empower researchers and hobbyist makers to use streaming as a platform for building creative new representations of music.
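The abstract does not detail how BufferBeats maps audio to other modalities, but the general idea it describes can be sketched: once decoded audio buffers are available (the access that DRM encryption normally blocks), each buffer's energy can drive a non-auditory output such as a vibration motor or a light. The function names below are illustrative assumptions for this sketch, not part of BufferBeats' actual API:

```python
import math

def buffer_rms(samples):
    """Root-mean-square energy of one audio buffer (PCM floats in [-1, 1])."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_to_intensity(samples, max_level=255):
    """Map a buffer's energy to a 0..max_level actuator level.

    Hypothetical mapping: a full-scale sine wave has RMS 1/sqrt(2),
    so we normalize against that ceiling before scaling.
    """
    rms = buffer_rms(samples)
    scaled = min(rms * math.sqrt(2), 1.0)
    return round(scaled * max_level)
```

In a streaming pipeline, a function like this would run once per decoded buffer (e.g. every 1024 samples), feeding the resulting level to a haptic or visual renderer; per-buffer energy is the simplest such feature, and richer mappings (pitch, beat, spectral shape) follow the same pattern.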


Published in
CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021 · 2965 pages
ISBN: 9781450380959
DOI: 10.1145/3411763
Copyright © 2021 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: poster · Research · Refereed limited

Acceptance Rates
Overall Acceptance Rate: 6,164 of 23,696 submissions, 26%
