Research Article

I See What You're Hearing: Facilitating The Effect of Environment on Perceived Emotion While Teleconferencing

Published: 16 April 2023

Abstract

Our perception of emotion is highly contextual. Changes in the environment can affect our narrative framing and thus alter how we perceive the emotions of our interlocutors. Commercial videoconferencing platforms, due to technical limitations, typically suppress most of the user's environment. As a result, participants in a video call often lack contextual awareness, and this affects how they perceive the emotions of conversants. We present a videoconferencing module that visualizes the user's aural environment to enhance awareness between interlocutors. The system visualizes environmental sound based on its semantic and acoustic properties. We found that our visualization system was about 50% effective at eliciting emotional perceptions in users similar to those elicited by the environmental sound it replaced. The contributed system provides a unique approach to facilitating ambient awareness on an implicit emotional level in situations where multimodal environmental context is suppressed.
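The abstract describes extracting both semantic and acoustic properties of environmental sound to drive the visualization. As a rough illustration only (this excerpt does not specify the authors' actual pipeline), the Python sketch below derives a semantic label with the pretrained YAMNet audio-event classifier from TensorFlow Hub, plus two simple acoustic features (loudness and spectral centroid) that a rendering layer could map onto visual parameters; the model choice and feature choices here are assumptions, not the published system.

```python
# Illustrative sketch, not the paper's implementation: classify a mono
# 16 kHz waveform with the pretrained YAMNet model and compute simple
# acoustic features that a visualization layer could map to shape,
# size, or color.
import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

yamnet = hub.load('https://tfhub.dev/google/yamnet/1')

def load_class_names(model):
    # YAMNet ships its 521-class label map as a CSV asset.
    path = model.class_map_path().numpy().decode('utf-8')
    with tf.io.gfile.GFile(path) as f:
        return [row['display_name'] for row in csv.DictReader(f)]

CLASS_NAMES = load_class_names(yamnet)

def describe_clip(waveform: np.ndarray, sample_rate: int = 16000) -> dict:
    """Semantic label plus acoustic features for a float32 mono clip in [-1, 1]."""
    scores, _embeddings, _spectrogram = yamnet(waveform)
    mean_scores = scores.numpy().mean(axis=0)     # average over model frames
    top = int(mean_scores.argmax())
    rms = float(np.sqrt(np.mean(waveform ** 2)))  # crude loudness proxy
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-9))
    return {
        'label': CLASS_NAMES[top],            # e.g. 'Dog', 'Traffic noise'
        'confidence': float(mean_scores[top]),
        'rms': rms,                           # could drive glyph size
        'spectral_centroid_hz': centroid,     # could drive glyph color
    }
```

A renderer could, for instance, pick a glyph from the semantic label and scale or tint it using the acoustic features; the specific mapping used in the published system is not recoverable from this excerpt.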



Published in

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1 (CSCW), April 2023. 3836 pages. EISSN: 2573-0142. Issue DOI: 10.1145/3593053.

Copyright © 2023 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

