Analysis of landmarks in recognition of face expressions

  • Application Problems
Pattern Recognition and Image Analysis

Abstract

Facial expression is a powerful mechanism that humans use to communicate their emotions, intentions, and opinions to each other. Recognizing facial expressions is essential for a responsive and socially interactive human-computer interface. An interface that can robustly recognize human facial expressions would allow automated systems to be deployed effectively in a variety of applications, including human-computer interaction, security, law enforcement, psychiatry, and education. In this paper, we examine several core problems in facial expression analysis from the perspective of landmarks and the distances between them, using a statistical approach. We use statistical analysis to determine the landmarks and features that are best suited to recognizing the expression on a face. Using a standard database, we examine the effectiveness of the landmark-based approach to classifying an expression (a) when a face with a neutral expression is available, and (b) when there is no a priori information about the face.
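The landmark-and-distance representation described in the abstract can be sketched in a few lines. The landmark names, coordinates, and inter-ocular normalization below are illustrative assumptions for a toy example, not the paper's actual landmark set or feature-selection procedure:

```python
import math

# Hypothetical landmarks: (x, y) image coordinates of a few fiducial points.
# The paper's actual landmark set and database are not reproduced here.
NEUTRAL = {
    "left_eye_outer":  (30.0, 40.0),
    "right_eye_outer": (70.0, 40.0),
    "mouth_left":      (38.0, 75.0),
    "mouth_right":     (62.0, 75.0),
}
SMILE = {
    "left_eye_outer":  (30.0, 40.0),
    "right_eye_outer": (70.0, 40.0),
    "mouth_left":      (34.0, 73.0),   # mouth widens and rises slightly
    "mouth_right":     (66.0, 73.0),
}

def distance_features(landmarks):
    """Pairwise Euclidean distances between landmarks, normalized by the
    inter-ocular distance so the features are invariant to image scale."""
    iod = math.dist(landmarks["left_eye_outer"], landmarks["right_eye_outer"])
    names = sorted(landmarks)
    feats = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            feats[(a, b)] = math.dist(landmarks[a], landmarks[b]) / iod
    return feats

def expression_deltas(neutral, expressive):
    """Per-feature change relative to the neutral face (setting (a) in the
    abstract); distances with large |delta| are candidates for the features
    best suited to recognizing an expression."""
    fn, fe = distance_features(neutral), distance_features(expressive)
    return {k: fe[k] - fn[k] for k in fn}

deltas = expression_deltas(NEUTRAL, SMILE)
most_changed = max(deltas, key=lambda k: abs(deltas[k]))
```

With these toy coordinates the mouth-corner distance changes the most, while the rigid eye-to-eye distance stays fixed; setting (b) in the abstract, where no neutral face is available, would instead compare normalized distances against population statistics rather than a per-subject baseline.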



Author information

Correspondence to N. Alugupally.

Additional information

The article is published in its original language.

Nripen Alugupally completed his MS from the Department of Computer Science and Engineering at the University of Nebraska-Lincoln. He is currently working as a Software Engineer at Pacific Gas & Electric.

Sanjiv Bhatia is an Associate Professor in the Department of Computer Science and Mathematics at the University of Missouri at St. Louis. His research interests include algorithms, computer vision, and image analysis.

David Marx is a Professor in the Statistics Department at the University of Nebraska-Lincoln. His research interests include spatial statistics, linear and non-linear models, and biometrics. He has authored or co-authored over 150 papers.

Ashok Samal is a Professor in the Department of Computer Science and Engineering at the University of Nebraska-Lincoln. His research interests include image analysis, including biometrics, document image analysis, and computer vision applications. He has published extensively in these areas.


About this article

Cite this article

Alugupally, N., Samal, A., Marx, D. et al. Analysis of landmarks in recognition of face expressions. Pattern Recognit. Image Anal. 21, 681–693 (2011). https://doi.org/10.1134/S105466181104002X
