Control of Speech-Related Facial Movements of an Avatar from Video

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6895)

Abstract

Several puppetry techniques have recently been proposed for transferring emotional facial expressions from a user's video stream to an avatar. One approach defines correspondence functions between landmarks extracted by face tracking and the MPEG-4 Facial Animation Parameters (FAPs) that drive the 3D avatar's facial expressions [1]. More recently, Saragih and colleagues [2] proposed a real-time puppetry method that requires only a single image of the avatar and a single image of the user.
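
To make the correspondence-function idea concrete, here is a minimal sketch, not the authors' published code, of one common form such a function takes: a linear mapping from normalized landmark displacements to MPEG-4 FAP values. The landmark count, the matrix W (randomly initialized here; in practice it would be learned from paired examples or hand-designed), and the inter-ocular-distance normalization are all illustrative assumptions.

```python
import numpy as np

N_LANDMARKS = 68   # typical face-tracker landmark count (assumption)
N_FAPS = 66        # number of low-level MPEG-4 FAPs

rng = np.random.default_rng(0)
# Stand-in for a learned or hand-designed correspondence matrix.
W = rng.normal(size=(N_FAPS, 2 * N_LANDMARKS))

def landmarks_to_faps(landmarks, neutral, iod):
    """Map tracked 2D landmarks to a vector of FAP values.

    landmarks, neutral: (N_LANDMARKS, 2) arrays of current and neutral-pose
    landmark positions; iod: inter-ocular distance, used for scale
    normalization in the spirit of MPEG-4 FAP units (FAPUs).
    """
    # Displacement from the neutral face, scale-normalized.
    delta = (landmarks - neutral) / iod
    # Linear correspondence function: FAPs = W @ vec(delta).
    return W @ delta.ravel()

# Usage with dummy data: one FAP value per parameter, per video frame.
neutral = rng.normal(size=(N_LANDMARKS, 2))
current = neutral + 0.01 * rng.normal(size=(N_LANDMARKS, 2))
faps = landmarks_to_faps(current, neutral, iod=1.0)
print(faps.shape)  # (66,)
```

In a real puppetry pipeline, a vector like this would be computed for every video frame and streamed to the avatar's animation engine.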

References

  1. Baptista Queiroz, R., Braun, A., Moreira, J., Cohen, M., Musse, S.R., Thielo, M.R., Samadani, R.: Reflecting User Faces in Avatars. In: Allbeck, J., et al. (eds.) IVA 2010. LNCS, vol. 6356, pp. 420–426. Springer, Heidelberg (2010)

  2. Saragih, J.M., Lucey, S., Cohn, J.F.: Real-time avatar animation from a single image. In: IEEE International Conference on Automatic Face and Gesture Recognition (FG 2011), Santa Barbara, CA (2011)

  3. McGurk, H., MacDonald, J.: Hearing lips and seeing voices. Nature 264, 746–748 (1976)

  4. Milborrow, S., Nicolls, F.: Locating Facial Features with an Extended Active Shape Model. In: European Conference on Computer Vision, Marseille, France, pp. 504–513 (2008)

  5. Reveret, L., Bailly, G., Badin, P.: MOTHER: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation. In: 6th Int. Conference of Spoken Language Processing, ICSLP 2000, Beijing, China (2000)


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gibert, G., Stevens, C.J. (2011). Control of Speech-Related Facial Movements of an Avatar from Video. In: Vilhjálmsson, H.H., Kopp, S., Marsella, S., Thórisson, K.R. (eds.) Intelligent Virtual Agents. IVA 2011. Lecture Notes in Computer Science (LNAI), vol. 6895. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23974-8_55

  • DOI: https://doi.org/10.1007/978-3-642-23974-8_55

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23973-1

  • Online ISBN: 978-3-642-23974-8

  • eBook Packages: Computer Science, Computer Science (R0)
