Key Frame Extraction and Classification of Human Activities Using Motion Energy

  • Conference paper
Advances in Computational Intelligence Systems (UKCI 2018)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 840)

Abstract

A pressing challenge for assistive robots that learn human activities by observing a human perform a task is how to define movement representations (states), a problem that has recently been explored in search of improved solutions. This paper proposes a method for extracting key frames (or poses) of human activities from skeleton joint coordinates obtained with an RGB-D camera (depth sensor). The motion energy (kinetic energy) of each pose in an activity sequence is computed, and a novel approach is proposed for extracting the key pose locations that define an activity using moving-average crossovers of the computed pose kinetic energy. This matters because not all frames of an activity sequence are key to defining the activity. To evaluate the reliability of the extracted key poses, a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), which can learn the sequence of state transitions within an activity, is applied to classify activities from the identified key poses. This is important for assistive robots, which must identify key human poses and state transitions in order to carry out human activities correctly. Preliminary experimental results are presented to illustrate the proposed methodology.
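The abstract describes a two-stage pipeline: key-frame extraction from pose kinetic energy, followed by LSTM classification of the extracted key-pose sequences. As a rough illustration of the first stage only, the following minimal Python sketch computes a per-frame kinetic-energy proxy from skeleton joint coordinates and marks key-frame locations at moving-average crossovers. Everything here is an assumption rather than the paper's reported configuration: the function names, unit joint masses, fixed frame rate, and the 5- and 15-frame window lengths are all illustrative.

    import numpy as np

    def pose_kinetic_energy(joints, fps=30.0):
        """Per-frame kinetic-energy proxy for a skeleton sequence.

        joints: array of shape (T, J, 3) -- T frames, J joints, xyz coordinates.
        Joint velocities are approximated by finite differences between
        consecutive frames; unit joint masses are assumed.
        """
        vel = np.diff(joints, axis=0) * fps           # (T-1, J, 3) joint velocities
        return 0.5 * (vel ** 2).sum(axis=(1, 2))      # sum of 0.5*|v|^2 over joints

    def moving_average(x, w):
        """Causal moving average with window w, same length as x via edge padding."""
        padded = np.pad(x, (w - 1, 0), mode="edge")
        return np.convolve(padded, np.ones(w) / w, mode="valid")

    def key_frame_indices(ke, short_w=5, long_w=15):
        """Candidate key-pose locations: frames where the short-window moving
        average of kinetic energy crosses the long-window moving average."""
        fast = moving_average(ke, short_w)
        slow = moving_average(ke, long_w)
        sign = np.sign(fast - slow)
        return np.where(np.diff(sign) != 0)[0] + 1    # indices just after a crossover

    # Usage on synthetic data: 120 frames of a 15-joint skeleton random walk.
    rng = np.random.default_rng(0)
    skeleton = rng.normal(size=(120, 15, 3)).cumsum(axis=0) * 0.01
    print(key_frame_indices(pose_kinetic_energy(skeleton)))

For the second stage, a sequence classifier over the selected key poses could look like the sketch below (TensorFlow/Keras); the layer width, input dimensions, and number of classes are again placeholders, not the paper's settings.

    import tensorflow as tf

    n_key_poses, n_joints, n_classes = 8, 15, 5       # hypothetical dimensions
    model = tf.keras.Sequential([
        # One timestep per extracted key pose, with flattened xyz joint coordinates.
        tf.keras.layers.LSTM(64, input_shape=(n_key_poses, n_joints * 3)),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])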



Author information

Corresponding author

Correspondence to David Ada Adama.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Adama, D.A., Lotfi, A., Langensiepen, C. (2019). Key Frame Extraction and Classification of Human Activities Using Motion Energy. In: Lotfi, A., Bouchachia, H., Gegov, A., Langensiepen, C., McGinnity, M. (eds) Advances in Computational Intelligence Systems. UKCI 2018. Advances in Intelligent Systems and Computing, vol 840. Springer, Cham. https://doi.org/10.1007/978-3-319-97982-3_25
