ABSTRACT
Emotion recognition is a young but maturing research field in which the need for engineering models, and in particular design models, is emerging. To address these engineering challenges, we reuse and adapt results from the research field of multimodal interaction, since the expression of an emotion is intrinsically multimodal. In this paper, we refine the definition of an interaction modality for the case of passive emotion recognition. We also study the combination of modalities by applying the CARE properties, and we highlight the benefits of our design model for emotion recognition.
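As an illustration only (not the paper's own formalization), assuming the CARE properties of multimodal interaction (Complementarity, Assignment, Redundancy, Equivalence, as introduced by Coutaz and Nigay), a minimal Python sketch of redundancy-based fusion of two hypothetical emotion-recognition modalities might look like this; the `EmotionEstimate` type and the confidence-combination rule are assumptions for the sake of the example:

```python
from dataclasses import dataclass


@dataclass
class EmotionEstimate:
    """Output of one recognition modality (e.g. facial expression, voice)."""
    label: str         # recognized emotion, e.g. "joy" or "anger"
    confidence: float  # recognizer confidence in [0, 1]


def fuse(a: EmotionEstimate, b: EmotionEstimate) -> EmotionEstimate:
    """Combine two modalities under CARE-style properties.

    Redundancy: if both modalities convey the same emotion, their
    agreement reinforces confidence (noisy-or combination).
    Equivalence: if they disagree, either modality alone is usable,
    so keep the more confident estimate.
    """
    if a.label == b.label:
        combined = 1.0 - (1.0 - a.confidence) * (1.0 - b.confidence)
        return EmotionEstimate(a.label, combined)
    return a if a.confidence >= b.confidence else b


face = EmotionEstimate("joy", 0.8)
voice = EmotionEstimate("joy", 0.6)
print(fuse(face, voice))  # agreement: "joy" with reinforced confidence 0.92
```

The noisy-or rule is just one possible redundancy policy; a real design following the paper's model would choose the fusion rule per pair of modalities.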
Index Terms
- Reconnaissance d'Emotions: un point de vue interaction multimodale (Emotion Recognition: a multimodal interaction point of view)