DOI: 10.1145/1322192.1322218
poster

Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking

Published: 12 November 2007

ABSTRACT

This paper proposes a gaze-communicative stuffed-toy robot system with joint-attention and eye-contact reactions based on ambient gaze-tracking. To allow free and natural interaction, we adopted our remote gaze-tracking method. In response to the user's gaze, the gaze-reactive stuffed-toy robot gradually establishes 1) joint attention by turning its head toward the user's gaze target and 2) eye contact through several sets of reactive motions. From both subjective evaluations and observations of the user's gaze in the demonstration experiments, we found that i) joint attention draws the user's interest together with the interest the user attributes to the robot, ii) "eye contact" gives the user a favorable impression of the robot, and iii) this impression is enhanced when "eye contact" is combined with "joint attention." These results support the approach of our embodied gaze-communication model.
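As an illustration of how such a gaze-reactive controller could be organized, the following is a minimal sketch in Python, not the implementation described in the paper. Every name in it (GazeSample, StuffedToyRobot, the dwell-time threshold, the "eye_contact" and "joint_attention" motion sets) is a hypothetical placeholder; the sketch only shows the basic idea of switching between an eye-contact reaction (when the user looks at the robot) and joint attention (turning the robot's head toward whatever the user is looking at) based on an ambient gaze estimate.

```python
# Hypothetical sketch of a gaze-reactive behavior loop (illustrative only):
# switch between eye-contact reactions and joint attention depending on
# where the ambient gaze tracker says the user is looking. Class names,
# thresholds, and motion-set names are assumptions, not the paper's system.

from dataclasses import dataclass


@dataclass
class GazeSample:
    target: str        # e.g. "robot", "object_A", or "none"
    timestamp: float   # seconds


class StuffedToyRobot:
    DWELL_SEC = 1.5    # assumed dwell time before the robot reacts

    def __init__(self, gaze_tracker, head, motions):
        self.gaze_tracker = gaze_tracker   # provides latest() -> GazeSample
        self.head = head                   # can turn toward a named target
        self.motions = motions             # named motion sets with play(name)
        self._dwell_target = None
        self._dwell_start = 0.0

    def _dwelled(self, sample):
        """True once the user's gaze has rested on one target long enough."""
        if sample.target != self._dwell_target:
            self._dwell_target = sample.target
            self._dwell_start = sample.timestamp
            return False
        return sample.timestamp - self._dwell_start >= self.DWELL_SEC

    def step(self):
        sample = self.gaze_tracker.latest()
        if sample.target == "none" or not self._dwelled(sample):
            return
        if sample.target == "robot":
            # User is looking at the robot: face the user and play an
            # eye-contact reaction.
            self.head.turn_toward("user")
            self.motions.play("eye_contact")
        else:
            # User is looking at some object: establish joint attention by
            # turning the robot's head toward the same object.
            self.head.turn_toward(sample.target)
            self.motions.play("joint_attention")
        # Reset the dwell timer so a reaction is not replayed every frame.
        self._dwell_start = sample.timestamp
```

In the actual system the head motion is established gradually and the eye-contact reactions are selected from several motion sets; the sketch abstracts both into a single play() call.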

References

  1. Arrington Research. Viewpoint eye-tracker. http://arringtonresearch.com/index.html.Google ScholarGoogle Scholar
  2. C. Breazeal, D. Buchsbaum, J. Gray, D. Gatenby, and B. Blumberg. Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots. Autonomous Robots, 11, 2005. Issues 1--2. Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. C. Castellini and G. Sndini. Gaze tracking for robotic control in intelligent teleoperation and prosthetics. In COGAIN 2006, pages 73--77, 2006.Google ScholarGoogle Scholar
  4. B. R. Duffy. Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3):177--190, 2003.Google ScholarGoogle ScholarCross RefCross Ref
  5. A. Fukayama, T. Ohno, N. Mukawa, M. Sawaki, and N. Hagita. Messages embedded in gaze of interface agents -- impression management with agent's gaze--. In Proc. ACM SIGCHI2002, volume 1, pages 41--49, 2002. Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. M. Imai, T. Ono, and H. Ishiguro. Physical relation and expression: Joint attention for human--robot interaction. In IEEE Int. Workshop on Robot and Human Communication, pages 512--517, 2001.Google ScholarGoogle ScholarCross RefCross Ref
  7. A. Kendon. Some functions of gaze-direction in social interaction. Acta Psychologica, 26:22--63, 1967.Google ScholarGoogle ScholarCross RefCross Ref
  8. H. Kojima. Infanoid: A babybot that explores the social environment. Socially Intelligent Agents, K. Dautenhahn and A. H. Bond and L. Canamero and B. Edmonds, Kluwer Academic Publishers, pages 157--164, 2002.Google ScholarGoogle Scholar
  9. Y. Matsumoto and A. Zelinsky. An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement. In Proc. Int. Conf. Automatic Face and Gesture Recognition, pages 499--504, 2000. Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. C. Moore, P. J. Dunham, and P. Dunham. Joint Attention: Its Origins and Role in Development. Lawrence Erlbaum, 1995.Google ScholarGoogle Scholar
  11. D. Moore. 'it's like a gold medal and it's mine' - dolls in dementia care. Journal of Dementia Care, Vol 9 No.6 2001, 9(6):20--22, 2001.Google ScholarGoogle Scholar
  12. nac image technology Inc. eye mark recorderEMR-8B. http://www.ceatec.com/en/2005/news/ne_web_detail.html?volume=15.Google ScholarGoogle Scholar
  13. T. Ohno, N. Mukawa, and S. Kawato. Just blink your eyes: A head-free gaze tracking system. In Proc. CHI2003, pages 950--951, 2003. Google ScholarGoogle ScholarDigital LibraryDigital Library
  14. E. Pogalin, A. Redert, I. Patras, and E. Hendriks. Gaze tracking by using factorized likelihoods particle filtering and stereo vision. Proc. 3DPVT06, pages 57--64, 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  15. S. Ratjatawan. Tsunami children foundation (tcf) psychological support services, 2005. http://tsunamicf.org/psychological.htm.Google ScholarGoogle Scholar
  16. B. Scassellati. Theory of mind for a humanoid robot. Autonomous Robots, 12, 2002. Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. D. Sekiguchi, M. Inami, and S. Tachi. RobotPHONE: RUI for interpersonal communication. In CHI2001 Extended Abstracts, pages 277--278, 2001. Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. C. Sidner, C. Lee, C. D. Kidd, N. Lesh, and C. Rich. Explorations in engagement for humans and robots. Artificial Intelligence, 12, 2005. Issues 1--2. Google ScholarGoogle ScholarDigital LibraryDigital Library
  19. A. L. Thomaz, M. Berlin, and C. Breazeal. An embodied computational model of social referencing. ROMAN 2005, 2005.Google ScholarGoogle ScholarCross RefCross Ref
  20. J. Wang, E. Sung, and R. Venkateswarlu. Estimating the eye gaze from one eye. Computer Vision and Image Understanding, 98(1):83--103, 2004. Google ScholarGoogle ScholarDigital LibraryDigital Library
  21. H. Yamazoe, A. Utsumi, and S. Abe. Remote gaze direction estimation with a single camera based on facial-feature tracking. Asian Conference on Computer Vision (ACCV2007) Demo Session, 2007, to apper.Google ScholarGoogle Scholar
  22. T. Yonezawa, N. Suzuki, S. Abe, K. Mase, and K. Kogure. Crossmodal coordination of expressive strength between voice and gesture for personified media. Proc. of ICMI06, pages 43--50, 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. T. Yonezawa, H. Yamazoe, A. Utsumi, and S. Abe. Gazecoppet: Hierarchical gaze-communication in ambient space. In SIGGRAPH Proceedings DVD--ROM, 2007. Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. Y. Yoshikawa, K. Shinozawa, H. Ishiguro, N. Hagita, and T. Miyamoto. The effects of responsive eye movement and blinking behavior in a communication robot. In Proc. IROS2006, pages 4564--4569, 2006.Google ScholarGoogle ScholarCross RefCross Ref

Published in
ICMI '07: Proceedings of the 9th International Conference on Multimodal Interfaces
November 2007, 402 pages
ISBN: 9781595938176
DOI: 10.1145/1322192
Copyright © 2007 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
Published: 12 November 2007

Qualifiers: poster
Acceptance rate: overall, 453 of 1,080 submissions (42%)
