
Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 829)


Abstract

The apex frame is the frame that contains the highest-intensity facial movement in a video sequence. It plays a crucial role in the analysis of micro-expressions, which generally involve only minute facial movements. Identifying this frame is difficult and normally requires laborious, time-consuming effort from highly skilled specialists. Therefore, a convolutional neural network-based technique is proposed to automate apex frame detection using a novel continuous labeling scheme. The network is trained with ascending and descending labels that follow linear and exponential functions, pivoted on the ground-truth apex frame. Two datasets, CASME II and SAMM, are used to verify the proposed algorithm, where the apex frame is determined either from the maximum label intensity or from a sliding window over the predicted label intensities. The results show that the linear continuous label with the sliding-window approach produced the lowest average error of 14.37 frames.
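The continuous labeling scheme and the two spotting strategies can be summarised in a short sketch. The snippet below is a minimal illustration only, not the authors' implementation: the function names, the exponential decay constant, and the window size are assumptions introduced here for clarity.

import numpy as np

def continuous_labels(num_frames, apex_idx, scheme="linear"):
    # Assign a continuous label to every frame of a sequence.
    # Labels ascend from the onset to the ground-truth apex frame and
    # descend towards the offset, following either a linear or an
    # exponential profile (the exact profile shapes are assumptions).
    idx = np.arange(num_frames, dtype=float)
    # Distance from the apex, normalised to [0, 1]
    dist = np.abs(idx - apex_idx) / max(apex_idx, num_frames - 1 - apex_idx, 1)
    if scheme == "linear":
        return 1.0 - dist              # linear ramp up to and down from the apex
    if scheme == "exponential":
        return np.exp(-5.0 * dist)     # decay constant 5.0 is an assumed value
    raise ValueError("unknown scheme: " + scheme)

def spot_apex(predicted, window=1):
    # Locate the apex frame from per-frame label predictions.
    # window == 1 : take the frame with the maximum predicted label.
    # window  > 1 : slide a window over the predictions and return the
    #               centre of the window with the largest mean label.
    predicted = np.asarray(predicted, dtype=float)
    if window <= 1:
        return int(np.argmax(predicted))
    means = np.convolve(predicted, np.ones(window) / window, mode="valid")
    return int(np.argmax(means)) + window // 2

For example, continuous_labels(60, apex_idx=25, scheme="linear") gives the per-frame training targets for a 60-frame clip, and spot_apex(model_outputs, window=5) picks the apex from the network's predictions using the sliding-window rule.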



Acknowledgments

The authors would like to acknowledge funding from Universiti Kebangsaan Malaysia (Geran Universiti Penyelidikan: GUP-2019-008) and the Ministry of Higher Education Malaysia (Fundamental Research Grant Scheme: FRGS/1/2019/ICT02/UKM/02/1).

Author information


Corresponding author

Correspondence to Mohd Asyraf Zulkifley.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Min, K.S., Zulkifley, M.A., Yanikoglu, B., Kamari, N.A.M. (2022). Apex Frame Spotting Using Convolutional Neural Networks with Continuous Labeling. In: Mahyuddin, N.M., Mat Noor, N.R., Mat Sakim, H.A. (eds) Proceedings of the 11th International Conference on Robotics, Vision, Signal Processing and Power Applications. Lecture Notes in Electrical Engineering, vol 829. Springer, Singapore. https://doi.org/10.1007/978-981-16-8129-5_127
