ABSTRACT
In daily life, the interactive roles of leaders, followers, and coordinators tend to emerge from multiparty collaboration. The primary purpose of this study is to automatically predict the leading role in multiparty interaction using ubiquitous computing techniques. Although the leading role has previously been predicted for an entire task, little attention has been paid to how roles are reorganized during a task. To find the verbal and nonverbal cues that might predict roles, we asked neutral third parties to select the participant playing the leading role in an assembly task. We then examined how behavioral data gathered during the task correlated with these third-party evaluations of the leading role over time. The preliminary results suggest that task-oriented utterances and behaviors that verify progress status contribute to predicting both the emerging and the reorganized leader. Finally, we discuss the implications of our findings for the design of applications that can enhance multiparty collaboration.
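The windowed analysis the abstract describes — comparing behavioral cues gathered during a task against third-party leader judgments over time — can be illustrated with a minimal sketch. This is not the authors' actual method; the data, window size, and the choice of task-oriented utterance counts as the sole cue are all hypothetical assumptions for illustration.

```python
# Minimal sketch (not the authors' method): predict the leading role per
# time window from task-oriented utterance counts, then measure agreement
# with third-party leader judgments. All data below are hypothetical.

def predict_leaders(utterance_counts):
    """For each time window, predict the participant with the most
    task-oriented utterances as the emergent leader."""
    return [max(window, key=window.get) for window in utterance_counts]

def agreement(predicted, judged):
    """Fraction of windows where the prediction matches the
    third-party evaluation of the leading role."""
    matches = sum(p == j for p, j in zip(predicted, judged))
    return matches / len(judged)

# Hypothetical counts of task-oriented utterances per one-minute window.
counts = [
    {"A": 7, "B": 2, "C": 1},   # A dominates early in the task
    {"A": 5, "B": 3, "C": 2},
    {"A": 1, "B": 6, "C": 2},   # leadership reorganizes toward B
    {"A": 0, "B": 8, "C": 3},
]
judged = ["A", "A", "B", "B"]   # leaders chosen by neutral third parties

predicted = predict_leaders(counts)
print(predicted)                     # ['A', 'A', 'B', 'B']
print(agreement(predicted, judged))  # 1.0
```

Evaluating per window, rather than once per task, is what lets the analysis capture the reorganization of roles mid-task (here, the shift from A to B).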
Index Terms
- Analyzing the structure of the emergent division of labor in multiparty collaboration