Towards Inclusive Streaming: Building Multimodal Music Experiences for the Deaf and Hard of Hearing

ABSTRACT
As online streaming becomes a primary method of music consumption, the modalities that many people with hearing loss rely on to enjoy music need to be supported. While visual or tactile representations can be used to experience music at a live event or from a recording, DRM anti-piracy encryption restricts access to the audio data needed to create these multimodal experiences for music streaming. We introduce BufferBeats, a toolkit for building multimodal music streaming experiences. To explore the flexibility of the toolkit and to exhibit its potential use cases, we introduce and reflect upon building a collection of technical demonstrations that bring previous and new multimodal music experiences to streaming. Grounding our work in critical theories on design, making, and disability, as well as the experiences of a small group of community partners, we argue that support for multimodal music streaming experiences will not only be more inclusive of deaf and hard of hearing listeners, but will also empower researchers and hobbyist makers to use streaming as a platform for building creative new representations of music.
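The toolkit itself is not reproduced here, but the core idea of deriving a non-auditory representation from decoded audio can be sketched in a few lines. The following is a hypothetical illustration (not BufferBeats code): it maps the RMS energy of a PCM audio frame to a 0–255 haptic intensity, the kind of transform a tactile display driver might apply once the audio data is accessible.

```python
import math

def rms_to_haptic(frame, max_amp=32768.0):
    """Map a 16-bit PCM frame's RMS energy to a 0-255 haptic intensity.

    `frame` is a sequence of signed samples; `max_amp` is the full-scale
    amplitude for 16-bit audio. Both names are illustrative, not from
    the BufferBeats API.
    """
    if not frame:
        return 0  # no samples: no vibration
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return min(255, int(255 * rms / max_amp))

# A silent frame produces no vibration; a loud frame a strong one.
print(rms_to_haptic([0] * 512))      # 0
print(rms_to_haptic([20000] * 512))  # 155
```

In a streaming context, a transform like this would run on each buffer of decoded audio before playback, which is precisely the step that DRM encryption prevents on locked-down platforms.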