Generating Embodied Descriptions Tailored to User Preferences

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4722)

Abstract

We describe two user studies that measure the impact of selecting the head and eye behaviour of an animated talking head based on the characteristic facial displays of a speaker expressing positive and negative user-preference evaluations. In the first study, human judges reliably identified positive and negative evaluations from the motions of the talking head alone. In the second study, subjects generally preferred positive displays to accompany positive sentences and negative displays to accompany negative ones, and particularly disliked negative facial displays accompanying positive sentences.
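As a rough illustration of the selection step the abstract describes, the following sketch maps the polarity of an output sentence to one of a speaker's characteristic displays. This is a minimal sketch, not the paper's implementation: the display inventory, the weights, and the function name select_display are all invented for this example.

```python
# Hypothetical sketch (not the paper's method): choosing head/eye behaviour
# for a talking head from a speaker's characteristic displays, keyed on the
# evaluation polarity of the sentence being spoken.

import random

# Invented inventory of displays observed with positive vs. negative
# evaluations; the weights stand in for corpus frequencies.
DISPLAYS = {
    "positive": [("nod", 0.5), ("brow_raise", 0.3), ("lean_forward", 0.2)],
    "negative": [("head_shake", 0.4), ("brow_lower", 0.4), ("turn_away", 0.2)],
}

def select_display(polarity: str) -> str:
    """Pick one characteristic display for a sentence of the given polarity."""
    motions, weights = zip(*DISPLAYS[polarity])
    return random.choices(motions, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(select_display("positive"))  # e.g. "nod"
    print(select_display("negative"))  # e.g. "head_shake"
```

A weighted random choice, rather than always picking the most frequent display, keeps the animation from repeating the same motion on every evaluative sentence; this design choice is an assumption of the sketch, not a claim about the paper.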




Editor information

Catherine Pelachaud, Jean-Claude Martin, Elisabeth André, Gérard Chollet, Kostas Karpouzis, Danielle Pelé


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Foster, M.E. (2007). Generating Embodied Descriptions Tailored to User Preferences. In: Pelachaud, C., Martin, J.-C., André, E., Chollet, G., Karpouzis, K., Pelé, D. (eds) Intelligent Virtual Agents. IVA 2007. Lecture Notes in Computer Science, vol 4722. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74997-4_24


  • DOI: https://doi.org/10.1007/978-3-540-74997-4_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74996-7

  • Online ISBN: 978-3-540-74997-4

  • eBook Packages: Computer Science
