Bilinear factorisation for facial expression analysis and synthesis

IEE Proceedings - Vision, Image and Signal Processing

This paper addresses the issue of face representation for facial expression recognition and synthesis. In this context, a global active appearance model is used in conjunction with two bilinear factorisation models to separate expression and identity factors from the global appearance parameters. Although active appearance models and bilinear modelling are not new concepts, the main contribution of this paper is their combination to improve facial expression recognition and synthesis (control). Facial expression recognition is performed through linear discriminant analysis of the global appearance parameters extracted by active appearance model search. The results are compared with those obtained, for the same training and test images, by classifying the expression factors extracted through bilinear factorisation; this experiment highlights the advantages of bilinear factorisation. Finally, bilinear factorisation is exploited to synthesise facial expressions by replacing the extracted expression factors. This yields good synthesis performance in terms of the visual quality of the synthetic faces: in particular, open mouths, with or without visible teeth, are reconstructed with better quality than with classical linear-regression-based synthesis.
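
The following sketch (not taken from the paper) illustrates the factor-separation and factor-replacement ideas summarised above, using an asymmetric bilinear model fitted by singular value decomposition in the spirit of Tenenbaum and Freeman's style/content separation. The appearance-parameter dimension, the numbers of expressions and identities, the synthetic data and all variable names are illustrative assumptions; in the paper the input vectors would come from active appearance model search on real face images.

    # Minimal sketch, assuming appearance parameter vectors are already available
    # for each (expression, identity) pair; the data below are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_expr, n_id, n_app, J = 6, 10, 30, 5   # expressions, identities, appearance dims, factor dims

    # Toy data following an asymmetric bilinear model plus noise:
    # y(e, p) = A_true[e] @ B_true[:, p], with A_true[e] an expression-specific
    # mixing matrix and B_true[:, p] the identity factor of person p.
    A_true = rng.standard_normal((n_expr, n_app, J))
    B_true = rng.standard_normal((J, n_id))
    Y = np.einsum('eaj,jp->epa', A_true, B_true)
    Y += 0.01 * rng.standard_normal(Y.shape)

    # Fit by SVD: rows index (expression, appearance dimension), columns index identities.
    Y_stacked = Y.transpose(0, 2, 1).reshape(n_expr * n_app, n_id)
    U, s, Vt = np.linalg.svd(Y_stacked, full_matrices=False)
    A = (U[:, :J] * s[:J]).reshape(n_expr, n_app, J)   # A[e]: estimated expression factor
    B = Vt[:J, :]                                      # B[:, p]: estimated identity factor

    # Reconstruction check: appearance of person p under expression e is A[e] @ B[:, p].
    recon = np.einsum('eaj,jp->epa', A, B)
    print('relative reconstruction error:', np.linalg.norm(recon - Y) / np.linalg.norm(Y))

    # Expression synthesis by factor replacement: keep person p's identity factor,
    # but render it through another expression's factor, as described in the abstract.
    p, new_expr = 3, 2
    synth_appearance = A[new_expr] @ B[:, p]

In the same spirit, the paper's recognition comparison could be mimicked by training a linear discriminant classifier either on the raw appearance parameters or on the extracted expression factors; the sketch above deliberately stops at the factorisation and factor-replacement step.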
