Abstract
Facial expression is a powerful mechanism that humans use to communicate their emotions, intentions, and opinions to one another. Recognizing facial expressions is therefore essential for a responsive and socially interactive human-computer interface. An interface that can robustly recognize human facial expressions would allow automated systems to be deployed effectively in a variety of applications, including human-computer interaction, security, law enforcement, psychiatry, and education. In this paper, we examine several core problems in facial expression analysis from the perspective of landmarks and the distances between them, using a statistical approach. We use statistical analysis to determine the landmarks and features that are best suited to recognizing the expression on a face. Using a standard database, we examine the effectiveness of a landmark-based approach to classifying an expression (a) when a face with a neutral expression is available, and (b) when there is no a priori information about the face.
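The abstract's landmark-and-distance idea can be illustrated with a minimal sketch. The landmark names, coordinates, and the inter-ocular normalization below are illustrative assumptions, not the paper's actual feature set: pairwise distances between landmarks are scale-normalized, and, when a neutral face is available (case (a)), the features are the changes relative to that neutral baseline.

```python
import math

# Hypothetical landmark sets: (x, y) positions for a few facial fiducial
# points. Names and coordinates are illustrative, not from the paper.
NEUTRAL = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "mouth_l":   (38.0, 80.0),
    "mouth_r":   (62.0, 80.0),
}

SMILE = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "mouth_l":   (33.0, 78.0),   # mouth corners pulled outward and up
    "mouth_r":   (67.0, 78.0),
}

def distance_features(landmarks):
    """All pairwise Euclidean distances between landmarks, normalized
    by the inter-ocular distance so the features are scale-invariant."""
    iod = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    names = sorted(landmarks)
    return {
        (a, b): math.dist(landmarks[a], landmarks[b]) / iod
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }

def expression_deltas(face, neutral):
    """Per-feature change relative to the neutral face (case (a) in the
    abstract). Without a neutral face (case (b)), the raw normalized
    distances themselves would serve as the feature vector."""
    base = distance_features(neutral)
    return {k: v - base[k] for k, v in distance_features(face).items()}

deltas = expression_deltas(SMILE, NEUTRAL)
print(f"mouth-width change (normalized): {deltas[('mouth_l', 'mouth_r')]:+.3f}")
```

A statistical analysis, as described in the abstract, would then rank these distance features by how well they separate the expression classes.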
Author information
Additional information
The article is published in the original.
Nripen Alugupally completed his MS from the Department of Computer Science and Engineering at the University of Nebraska-Lincoln. He is currently working as a Software Engineer at Pacific Gas & Electric.
Sanjiv Bhatia is an Associate Professor in the Department of Computer Science and Mathematics at the University of Missouri at St. Louis. His research interests include algorithms, computer vision, and image analysis.
David Marx is a Professor in the Statistics Department at the University of Nebraska-Lincoln. His research interests include spatial statistics, linear and non-linear models, and biometrics. He has authored or co-authored over 150 papers.
Ashok Samal is a Professor in the Department of Computer Science and Engineering at the University of Nebraska-Lincoln. His research interests include image analysis, including biometrics, document image analysis, and computer vision applications. He has published extensively in these areas.
Alugupally, N., Samal, A., Marx, D. et al. Analysis of landmarks in recognition of face expressions. Pattern Recognit. Image Anal. 21, 681–693 (2011). https://doi.org/10.1134/S105466181104002X