
Motion Capture Synthesis with Adversarial Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10498)

Abstract

We propose a new statistical modeling approach, which we call the Sequential Adversarial Auto-Encoder (SAAE), for learning a synthesis model for motion sequences. The model exploits the adversarial idea popularized in machine learning for training accurate generative models. We further propose a conditional variant that takes additional information as input, such as the activity performed in a sequence or the emotion with which it is performed, and thus allows synthesis in context.

We are very grateful to Catherine Pélachaud for fruitful discussions and for access to and help with the Emilya dataset.
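
The page gives only the high-level idea, not the model details. As a rough illustration of the adversarial auto-encoder recipe the abstract builds on (an encoder/decoder trained for reconstruction while a discriminator pushes the latent codes toward a chosen prior, here adapted to sequences and conditioned on a context label), the following is a minimal PyTorch sketch. The GRU backbone, all layer sizes, and the way the activity/emotion label conditions the decoder are assumptions made for illustration, not the authors' architecture.

```python
# Minimal sketch (not the paper's implementation): a sequential adversarial
# auto-encoder. An RNN encoder compresses a motion sequence into a latent code,
# an RNN decoder reconstructs the frames from that code and a context label,
# and a discriminator pushes the codes toward a Gaussian prior.
# All sizes and the conditioning scheme below are illustrative assumptions.
import torch
import torch.nn as nn

FRAME_DIM, LATENT_DIM, HIDDEN, N_LABELS = 69, 32, 128, 8  # hypothetical sizes


class Encoder(nn.Module):
    """Maps a motion sequence (batch, T, FRAME_DIM) to one latent code per sequence."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FRAME_DIM, HIDDEN, batch_first=True)
        self.to_z = nn.Linear(HIDDEN, LATENT_DIM)

    def forward(self, x):
        _, h = self.rnn(x)               # h: (1, batch, HIDDEN)
        return self.to_z(h.squeeze(0))   # (batch, LATENT_DIM)


class Decoder(nn.Module):
    """Reconstructs T frames from a latent code and a context label (activity/emotion)."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(LATENT_DIM + N_LABELS, HIDDEN, batch_first=True)
        self.to_frame = nn.Linear(HIDDEN, FRAME_DIM)

    def forward(self, z, label_onehot, seq_len):
        # Feed the (code, label) pair to the RNN at every time step.
        step = torch.cat([z, label_onehot], dim=-1).unsqueeze(1)
        out, _ = self.rnn(step.repeat(1, seq_len, 1))
        return self.to_frame(out)        # (batch, T, FRAME_DIM)


class Discriminator(nn.Module):
    """Tells latent codes sampled from the prior apart from encoder outputs."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))

    def forward(self, z):
        return self.net(z)               # raw logits


def training_step(x, label_onehot, enc, dec, disc, opt_ae, opt_d):
    """One training step: discriminator update, then reconstruction + fooling term."""
    bce = nn.BCEWithLogitsLoss()
    real, fake = torch.ones(len(x), 1), torch.zeros(len(x), 1)

    # 1) Discriminator: prior samples are "real", encoder codes are "fake".
    z_fake = enc(x).detach()
    z_real = torch.randn_like(z_fake)
    d_loss = bce(disc(z_real), real) + bce(disc(z_fake), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Auto-encoder: reconstruct the frames and try to fool the discriminator.
    z = enc(x)
    x_hat = dec(z, label_onehot, x.size(1))
    ae_loss = nn.functional.mse_loss(x_hat, x) + bce(disc(z), real)
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return d_loss.item(), ae_loss.item()
```

At synthesis time one would sample a code from the prior, pick an activity or emotion label, and run the decoder alone; this mirrors the synthesis-in-context described in the abstract, although the authors' exact architecture and training procedure may differ.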



Author information


Corresponding author

Correspondence to Qi Wang.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Wang, Q., Artières, T. (2017). Motion Capture Synthesis with Adversarial Learning. In: Beskow, J., Peters, C., Castellano, G., O'Sullivan, C., Leite, I., Kopp, S. (eds) Intelligent Virtual Agents. IVA 2017. Lecture Notes in Computer Science, vol 10498. Springer, Cham. https://doi.org/10.1007/978-3-319-67401-8_60

  • DOI: https://doi.org/10.1007/978-3-319-67401-8_60

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67400-1

  • Online ISBN: 978-3-319-67401-8

  • eBook Packages: Computer Science, Computer Science (R0)
