Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 7403))

Abstract

In this chapter, we first summarize findings from two previous studies on the limitations of flat displays for embodied conversational agents (ECAs) in face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements, and present Furhat: a novel, simple, highly effective, and human-like back-projected robot head that uses computer animation to deliver facial movements and is equipped with a pan-tilt neck. After a detailed account of why and how Furhat was built, we discuss the advantages of optically projected animated agents for interaction, in terms of situatedness, environment and context awareness, and social, human-like face-to-face interaction with robots in which subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system, exhibited at the London Science Museum as part of a robot festival. We conclude by discussing future developments, applications, and opportunities of this technology.




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Al Moubayed, S., Beskow, J., Skantze, G., Granström, B. (2012). Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction. In: Esposito, A., Esposito, A.M., Vinciarelli, A., Hoffmann, R., Müller, V.C. (eds) Cognitive Behavioural Systems. Lecture Notes in Computer Science, vol 7403. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34584-5_9

  • DOI: https://doi.org/10.1007/978-3-642-34584-5_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34583-8

  • Online ISBN: 978-3-642-34584-5

  • eBook Packages: Computer Science (R0)
