
Deep Temporal Conv-LSTM for Activity Recognition

Published in: Neural Processing Letters

Abstract

Human activity recognition has gained interest from the research community due to advancements in sensor technology and improved machine learning algorithms. Wearable sensors have become increasingly ubiquitous, and their data contain rich temporal structure that describes the distinct underlying patterns and relationships of various activity types. These activities are typically sequential, with each activity window following from the preceding one. However, state-of-the-art methods usually model only the temporal characteristics within the sensor data and ignore the relationships between sliding windows. This research proposes a novel deep temporal Conv-LSTM architecture that improves activity recognition performance by exploiting both the temporal characteristics of the sensor data and the relationships between sliding windows. The proposed architecture is evaluated on a dataset containing transition activities: the Smartphone-Based Recognition of Human Activities and Postural Transitions dataset. The proposed hybrid architecture, with parallel feature-learning pipelines, demonstrates the ability to model the temporal relationships between activity windows, capturing transitions between activities accurately. In addition, a study of sliding-window size shows that the choice of window size affects recognition accuracy. The proposed deep temporal Conv-LSTM architecture achieves an accuracy of 0.916, outperforming the state-of-the-art accuracy.
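As background for the window-size study mentioned above, fixed-length sliding-window segmentation of a wearable sensor stream can be sketched as follows. This is a minimal illustration, not the paper's own preprocessing code; the function name, the 50% overlap, and the example stream dimensions are assumptions for the example.

```python
import numpy as np

def sliding_windows(signal, window_size, overlap=0.5):
    """Segment a (timesteps, channels) sensor stream into fixed-size windows.

    overlap is the fraction of each window shared with the next one.
    """
    step = int(window_size * (1 - overlap))
    starts = range(0, len(signal) - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

# Example: 10 s of 3-axis accelerometer data at an assumed 50 Hz sampling rate
stream = np.random.randn(500, 3)
windows = sliding_windows(stream, window_size=128)
print(windows.shape)  # (6, 128, 3)
```

Changing `window_size` here changes both the number of windows produced and how much of each activity (or postural transition) a single window can span, which is the trade-off the window-size study examines.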


Data Availability

Not applicable.

Code Availability

Not applicable.


Funding

This work was supported in part by the Ministry of Higher Education Malaysia through the Fundamental Research Grant Scheme (Project Code: FRGS/1/2019/ICT02/USM/02/1).

Author information


Corresponding author

Correspondence to Mohd Halim Mohd Noor.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Window size: 80

        A1   A2   A3   A4   A5    A6   A7   A8   A9  A10  A11  A12
A1     890   23    1    0    0     0    4    2    0    0    1    2
A2      31  859    2    1    2     0    3    6    0    0    2    3
A3       4   38  818    0    0     0    2    0    0    0    1    1
A4       0    0    0  918  148     0    2    3    0    3    1    0
A5       4    1    0  150  964     2    2    1    0    0    1    2
A6       0    0    0    0    0  1143    0    1    0    0    1    2
A7       0    0    0   13   14     0   40    1    1    3    5    1
A8       0    0    0    6   16     0    1   36    0    1    0    2
A9       0    0    0    5    3    17    0    2   37    1   31    0
A10      0    0    0   14    0     8    1    1    0   50    0    6
A11      0    0    0    6    2    16    0    0   18    2   42    0
A12      3    0    0    4    7    11    0    6    0   25    0   29

Window size: 100

        A1   A2   A3   A4   A5   A6   A7   A8   A9  A10  A11  A12
A1     702   34    0    0    0    0    1    1    0    0    1    1
A2      19  696   13    0    0    0    1    0    0    0    1    0
A3       3    5  676    0    0    0    0    0    0    0    0    3
A4       0    0    0  710  139    0    2    0    0    4    2    0
A5       1    1    0   93  800    1    0    1    0    1    4    1
A6       0    0    0    0    0  918    0    0    0    0    2    1
A7       0    1    0    6   10    0   22    1    1    4   15    1
A8       0    1    0    5   11    0    1   33    0    0    0    1
A9       0    0    0    3    1    9    0    2   36    0   24    1
A10      0    0    0   12    0    6    0    0    0   35    1    8
A11      0    1    0    4    1   13    0    1   17    2   30    0
A12      1    2    0    2    3    7    0    2    0   22    0   28

Window size: 120

        A1   A2   A3   A4   A5   A6   A7   A8   A9  A10  A11  A12
A1     600    2    1    0    1    0    0    1    0    0    1   10
A2      31  562    5    0    1    0    0    2    0    0    0    0
A3       3   11  567    0    0    0    0    0    0    0    1    0
A4       0    0    0  606   99    0    4    1    0    3    0    0
A5       0    1    0   43  697    0    4    3    0    0    3    1
A6       0    0    0    0    0  762    0    0    4    1    1    1
A7       0    0    0    7   12    0   26    1    0    1    5    0
A8       0    0    0    4    7    0    5   24    0    0    0    0
A9       0    0    0    5    3   11    0    0   24    0   21    0
A10      0    0    0    8    0    3    3    0    1   33    0    7
A11      0    0    0    4    2    6    4    0   13    0   28    0
A12      1    0    0    5    3    4    0    2    0   16    0   22

Window size: 140

        A1   A2   A3   A4   A5   A6   A7   A8   A9  A10  A11  A12
A1     499   21    3    0    0    0    2    1    0    0    0    4
A2      12  484   10    0    1    0    0   10    0    0    0    5
A3       0    5  481    0    0    0    1    0    0    0    0    1
A4       0    0    0  503   98    0    2    3    1    3    0    1
A5       0    0    0   77  552    0    4    8    0    1    3    0
A6       0    0    0    0    0  653    0    0    1    1    0    3
A7       0    0    0    5   12    0   25    0    0    0    1    0
A8       0    0    0    1    3    0    0   28    0    2    0    0
A9       0    0    0    3    0    5    1    2   21    0   23    0
A10      0    0    0    9    0    3    1    0    0   29    0    6
A11      0    0    0    3    1    7    4    0    8    0   28    0
A12      0    0    0    1    2    4    0    0    0   19    0   21

Window size: 128

        A1   A2   A3   A4   A5   A6   A7   A8   A9  A10  A11  A12
A1     525   49    4    0    0    0    0    1    0    0    0    1
A2      15  446    6    0    0    0    0    1    0    0    0    0
A3       0   10  526    0    0    0    0    0    0    0    1    0
A4       0    0    0  594   68    0    1    4    0    1    0    0
A5       1    1    0   21  674    2    1    2    0    1    5    0
A6       0    0    0    0    0  713    0    0    0    1    1    4
A7       0    0    0   11    6    0   27    2    0    1    2    0
A8       1    0    0    2    5    0    0   32    0    0    0    0
A9       0    0    0    5    1    7    0    0   20    0   25    1
A10      0    0    0    9    2    2    0    0    0   28    0    7
A11      0    0    0    1    3    6    0    1    5    0   39    0
A12      1    0    0    2    2    2    0    2    0   17    0   25
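The accuracy reported in the abstract can be cross-checked against the window-size-128 matrix above: overall accuracy is the diagonal sum (correctly classified windows) divided by the total window count. A minimal sketch, assuming rows and columns both follow the class order A1–A12:

```python
import numpy as np

# Confusion matrix for window size 128 (classes A1-A12), transcribed from the appendix
cm = np.array([
    [525,  49,   4,   0,   0,   0,   0,   1,   0,   0,   0,   1],
    [ 15, 446,   6,   0,   0,   0,   0,   1,   0,   0,   0,   0],
    [  0,  10, 526,   0,   0,   0,   0,   0,   0,   0,   1,   0],
    [  0,   0,   0, 594,  68,   0,   1,   4,   0,   1,   0,   0],
    [  1,   1,   0,  21, 674,   2,   1,   2,   0,   1,   5,   0],
    [  0,   0,   0,   0,   0, 713,   0,   0,   0,   1,   1,   4],
    [  0,   0,   0,  11,   6,   0,  27,   2,   0,   1,   2,   0],
    [  1,   0,   0,   2,   5,   0,   0,  32,   0,   0,   0,   0],
    [  0,   0,   0,   5,   1,   7,   0,   0,  20,   0,  25,   1],
    [  0,   0,   0,   9,   2,   2,   0,   0,   0,  28,   0,   7],
    [  0,   0,   0,   1,   3,   6,   0,   1,   5,   0,  39,   0],
    [  1,   0,   0,   2,   2,   2,   0,   2,   0,  17,   0,  25],
])

# Accuracy = correctly classified windows / all windows
accuracy = np.trace(cm) / cm.sum()
print(round(accuracy, 3))  # 0.916, matching the accuracy reported in the abstract
```

The same computation applied to the other window sizes gives lower values (e.g. roughly 0.892 for window size 80), consistent with the paper's finding that window size affects recognition accuracy.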


About this article


Cite this article

Mohd Noor, M.H., Tan, S.Y. & Ab Wahab, M.N. Deep Temporal Conv-LSTM for Activity Recognition. Neural Process Lett 54, 4027–4049 (2022). https://doi.org/10.1007/s11063-022-10799-5

