DOI: 10.1145/1538864.1538866

research-article

Evaluating crossmodal awareness of daily-partner robot to user's behaviors with gaze and utterance detection

Published: 11 May 2009

ABSTRACT

This paper proposes a daily-partner robot that is aware of the user's situation and behavior through gaze and utterance detection. For appropriate and familiar anthropomorphic interaction, the robot should wait for a suitable moment to speak to the user, according to his/her situation while he/she is performing a task or thinking. Accordingly, the proposed robot i) estimates the user's context by detecting his/her gaze and utterances, such as the target of the user's speech; ii) signals its need to speak through silent (i.e., without making an utterance) gaze turns toward the user and joint attention, taking advantage of the robot's attentiveness; and iii) delivers its message only when the user talks to the robot. The results of experiments combining subjects' daily tasks with and without the above steps show that the robot's crossmodal-aware behaviors are important for respectful communication: silent behaviors express the robot's intention to speak and draw the user's attention without disturbing the user's ongoing task.
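The three-step protocol described in the abstract (estimate context, signal silently, speak when addressed) can be sketched as a small state machine. This is a hypothetical illustration, not the authors' implementation: the class and method names (`PartnerRobot`, `observe`) and the boolean detector outputs are assumptions, standing in for the paper's actual gaze- and utterance-detection pipeline.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # no pending message
    WAITING = auto()    # message queued; user assumed busy
    SIGNALING = auto()  # silent gaze turn / joint attention toward the user

class PartnerRobot:
    """Minimal sketch of the crossmodal-aware protocol:
    i) estimate context from gaze/utterance cues,
    ii) signal silently (gaze turn) rather than interrupting,
    iii) speak only once the user addresses the robot."""

    def __init__(self):
        self.state = State.IDLE
        self.pending_message = None

    def queue_message(self, text):
        # A message arrives (e.g., a notification the robot should relay).
        self.pending_message = text
        if self.state is State.IDLE:
            self.state = State.WAITING

    def observe(self, user_gazes_at_robot, user_speaks_to_robot):
        """Advance one sensing cycle; inputs are assumed outputs of
        gaze and utterance detectors. Returns the robot's action."""
        if self.pending_message is None:
            return None
        if self.state is State.WAITING:
            # step ii: silent gaze turn toward the user, no utterance yet
            self.state = State.SIGNALING
            return "gaze_turn"
        if self.state is State.SIGNALING and user_speaks_to_robot:
            # step iii: the user talked to the robot; deliver the message
            msg = self.pending_message
            self.pending_message = None
            self.state = State.IDLE
            return f"say: {msg}"
        if self.state is State.SIGNALING and user_gazes_at_robot:
            # mutual gaze: hold joint attention, still without speaking
            return "joint_attention"
        return None
```

A typical cycle: `queue_message("mail arrived")`, then repeated `observe(...)` calls yield `"gaze_turn"`, then `"joint_attention"` while the user only looks, and finally `"say: mail arrived"` once the user speaks to the robot, after which the robot returns to `IDLE`.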


Published in

Casemans '09: Proceedings of the 3rd ACM International Workshop on Context-Awareness for Self-Managing Systems
May 2009, 61 pages
ISBN: 9781605584393
DOI: 10.1145/1538864

      Copyright © 2009 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

