Deep Ensemble Learning for Human Activity Recognition Using Wearable Sensors via Filter Activation

Published: 29 October 2022

Abstract

Over the past decade, human activity recognition (HAR) using wearable sensors has become a research hotspot due to its extensive use in application domains such as healthcare, fitness, smart homes, and eldercare. Deep neural networks, especially convolutional neural networks (CNNs), have attracted considerable attention in HAR scenarios. Despite their exceptional performance, CNNs with heavy computational overhead are not the best option for HAR tasks because of the limited computing resources of embedded devices. As is widely observed, CNNs contain many invalid filters that contribute very little to the output. Simply pruning these invalid filters can effectively accelerate CNNs, but it inevitably hurts performance. In this article, we propose a novel CNN for HAR that uses filter activation. In contrast to filter pruning, which is motivated by efficiency considerations, filter activation aims to reactivate these invalid filters from an accuracy-boosting perspective. We perform extensive experiments on several public HAR datasets, namely UCI-HAR (UCI), OPPORTUNITY (OPPO), UniMiB-SHAR (Uni), PAMAP2 (PAM2), WISDM (WIS), and USC-HAD (USC), which show the superiority of the proposed method over existing state-of-the-art (SOTA) approaches. Ablation studies are conducted to analyze its internal mechanism. Finally, inference speed and power consumption are evaluated on an embedded Raspberry Pi 3 Model B+ platform.
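To illustrate the "invalid filter" observation underlying both pruning and activation, the following is a minimal NumPy sketch (not the paper's method) that flags low-importance convolutional filters by their L1 weight norm, a common importance proxy in the pruning literature. The layer shape, synthetic weights, and the 10%-of-median threshold are all illustrative assumptions:

```python
import numpy as np

# Hypothetical conv layer weights: (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 32, 3, 3))
weights[:8] *= 1e-3  # simulate near-invalid filters with tiny weights

# L1 norm per filter, a common proxy for filter importance in pruning work.
l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

# Filters whose norm falls far below the layer median are "invalid":
# pruning would delete them, whereas filter activation would instead try
# to make them contribute again to boost accuracy.
invalid = l1 < 0.1 * np.median(l1)
print(int(invalid.sum()), "of", weights.shape[0], "filters flagged")
```

The key point of the abstract is the response to this diagnosis: deleting the flagged filters saves computation but loses capacity, while activating them keeps the model size fixed and recovers that capacity for accuracy.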

  44. [44] Teng Qi, Wang Kun, Zhang Lei, and He Jun. 2020. The layer-wise training convolutional neural networks using local loss for sensor-based human activity recognition. IEEE Sensors J. 20, 13 (2020), 72657274.Google ScholarGoogle ScholarCross RefCross Ref
  45. [45] Abedin Alireza, Ehsanpour Mahsa, Shi Qinfeng, Rezatofighi Hamid, and Ranasinghe Damith C.. 2021. Attend and discriminate: Beyond the state-of-the-art for human activity recognition using wearable sensors. Proc. ACM Interact., Mob., Wear. Ubiq 5, 1 (2021), 122.Google ScholarGoogle ScholarDigital LibraryDigital Library
  46. [46] Wang Kun, He Jun, and Zhang Lei. 2021. Sequential weakly labeled multiactivity localization and recognition on wearable sensors using recurrent attention networks. IEEE Trans. Hum.-Mach. Syst. 51, 4 (2021), 355–364.Google ScholarGoogle ScholarCross RefCross Ref
  47. [47] Qian Hangwei, Pan Sinno Jialin, and Miao Chunyan. 2021. Latent independent excitation for generalizable sensor-based cross-person activity recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. AAAI, 1192111929.Google ScholarGoogle ScholarCross RefCross Ref
  48. [48] Al-qaness Mohammed A. A., Dahou Abdelghani, Elaziz Mohamed Abd, and Helmi A. M.. 2022. Multi-ResAtt: Multilevel residual network with attention for human activity recognition using wearable sensors. IEEE Trans. Industr. Inform. Early access. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  49. [49] Xia Songpengcheng, Chu Lei, Pei Ling, Zhang Zixuan, Yu Wenxian, and Qiu Robert C.. 2021. Learning disentangled representation for mixed-reality human activity recognition with a single IMU sensor. IEEE Trans. Instrum. Measur. 70 (2021), 114.Google ScholarGoogle Scholar
  50. [50] Lima Wesllen Sousa, Bragança Hendrio L. S., and Souto Eduardo J. P.. 2021. NOHAR-NOvelty discrete data stream for human activity recognition based on smartphones with inertial sensors. Exp. Syst. Applic. 166 (2021), 114093.Google ScholarGoogle ScholarCross RefCross Ref
  51. [51] Alawneh Luay, Mohsen Belal, Al-Zinati Mohammad, Shatnawi Ahmed, and Al-Ayyoub Mahmoud. 2020. A comparison of unidirectional and bidirectional LSTM networks for human activity recognition. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 16.Google ScholarGoogle ScholarCross RefCross Ref
  52. [52] Zhang Min. 2019. Gait activity authentication using LSTM neural networks with smartphone sensors. In Proceedings of the 15th International Conference on Mobile Ad-Hoc and Sensor Networks (MSN). IEEE, 456461.Google ScholarGoogle ScholarCross RefCross Ref
  53. [53] Sun Xiaojie, Xu Hongji, Dong Zheng, Shi Leixin, Liu Qiang, Li Juan, Li Tiankuo, Fan Shidi, and Wang Yuhao. 2022. CapsGaNet: Deep neural network based on capsule and GRU for human activity recognition. IEEE Syst. J. Early access. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  54. [54] Li Chenglin, Niu Di, Jiang Bei, Zuo Xiao, and Yang Jianming. 2021. Meta-HAR: Federated representation learning for human activity recognition. In Proceedings of the International World Wide Web Conferences (WWW). ACM, 912922.Google ScholarGoogle ScholarDigital LibraryDigital Library
  55. [55] Bi Haixia, Perello-Nieto Miquel, Santos-Rodriguez Raul, and Flach Peter. 2020. Human activity recognition based on dynamic active learning. IEEE J. Biomed. Health Inform. 25, 4 (2020), 922934.Google ScholarGoogle ScholarCross RefCross Ref
  56. [56] Singh Satya P., Sharma Madan Kumar, Lay-Ekuakille Aimé, Gangwar Deepak, and Gupta Sukrit. 2020. Deep ConvLSTM with self-attention for human activity decoding using wearable sensors. IEEE Sensors J. 21, 6 (2020), 85758582.Google ScholarGoogle ScholarCross RefCross Ref
  57. [57] Deldari Shohreh, Smith Daniel V., Xue Hao, and Salim Flora D.. 2021. Time series change point detection with self-supervised contrastive predictive coding. In Proceedings of the Web Conference. ACM, 31243135.Google ScholarGoogle ScholarDigital LibraryDigital Library

Published in

ACM Transactions on Embedded Computing Systems, Volume 22, Issue 1
January 2023, 512 pages
ISSN: 1539-9087
EISSN: 1558-3465
DOI: 10.1145/3567467
Editor: Tulika Mitra

Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 29 October 2022
      • Online AM: 26 July 2022
      • Accepted: 22 July 2022
      • Revised: 23 June 2022
      • Received: 10 January 2022
