Real-time Safety Monitoring Vision System for Linemen in Buckets Using Spatio-temporal Inference

International Journal of Control, Automation and Systems

Abstract

Linemen risk falls, electric shocks, burns, and other injuries during their daily work, and these incidents are often fatal. In this paper, we present a novel vision-based real-time system for detecting and tracking the various non-rigid safety wearables worn by linemen in highly cluttered environments. We mount four imaging sensors on the repair truck's bucket to robustly monitor the linemen from four different viewpoints. The monitoring system first applies a novel fast background segmentation method to suppress false positives and reduce the search space. Next, we represent each safety wearable with a Gaussian mixture model and track it with a Lucas-Kanade (LK) tracker. To track safety wearables that are occluded or outside the camera's view, we propose a novel human pose inference method that extends existing CNN-based human pose estimation with a lightweight color-, shape-, and space-based inference mechanism. The proposed human pose inference method improves precision, recall, and speed. Experimental results on a number of challenging sequences demonstrate the effectiveness of the proposed scheme under complex backgrounds, prolonged occlusions, and variations in color, shape, and lighting.
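
The implementation details of the pipeline are not public, so the sketch below only illustrates the two standard building blocks the abstract names: Gaussian-mixture background subtraction to suppress the static scene, followed by pyramidal Lucas-Kanade (LK) point tracking on features seeded inside the foreground mask. It assumes OpenCV; the input file name and all parameter values are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

# Illustrative sketch only: approximates the abstract's two named building
# blocks (Gaussian-mixture background subtraction + LK tracking) with
# standard OpenCV components. "bucket_camera.mp4" is a hypothetical input.
cap = cv2.VideoCapture("bucket_camera.mp4")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=False)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

prev_gray, prev_pts = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1) Suppress the static background to shrink the search space.
    fg_mask = bg.apply(frame)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               np.ones((5, 5), np.uint8))

    if prev_pts is None:
        # Seed LK with corner features restricted to foreground regions.
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                           qualityLevel=0.01,
                                           minDistance=7, mask=fg_mask)
    else:
        # 2) Track the seeded points frame-to-frame with pyramidal LK.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None, **lk_params)
        good = next_pts[status.flatten() == 1]
        for x, y in good.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
        # Re-seed on the next frame if all points were lost.
        prev_pts = good.reshape(-1, 1, 2) if len(good) else None

    prev_gray = gray
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In the paper's system the tracked targets are GMM-modeled safety wearables rather than generic corner features, and a pose-inference stage re-acquires wearables lost to occlusion or leaving the field of view; both steps are beyond this sketch.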

Author information

Corresponding author

Correspondence to Unsang Park.

Additional information

Recommended by Associate Editor Kang-Hyun Jo under the direction of Editor Euntai Kim.

This work was partly supported by Institute for Information & communications Technology Promotion (IITP) grants funded by the Korea government (MSIT) (2017-0-01772, Development of QA system for video story understanding to pass Video Turing Test; 2017-0-01781, Data Collection and Automatic Tuning System Development for the Video Understanding).

Zahid Ali received his B.E. degree in Electronic Engineering from NED University in 2009 and his M.E. degree in Computer Engineering from Chosun University in 2013. He was awarded the Global IT Scholarship by NIIED, South Korea, in 2013, and received a Ph.D. degree from Sogang University in 2020. He is currently a post-doctoral researcher in the Computer Vision and Image Processing Lab. His research interests lie at the intersection of computer vision and machine learning.

Unsang Park received his B.S. and M.S. degrees from the Department of Materials Engineering, Hanyang University, Korea, in 1998 and 2000, respectively. He received his M.S. and Ph.D. degrees from the Department of Computer Science and Engineering, Michigan State University, MI, in 2004 and 2009, respectively. He is currently an assistant professor in the Department of Computer Science and Engineering at Sogang University. His research interests include pattern recognition, image processing, computer vision, and machine learning.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Ali, Z., Park, U. Real-time Safety Monitoring Vision System for Linemen in Buckets Using Spatio-temporal Inference. Int. J. Control Autom. Syst. 19, 505–520 (2021). https://doi.org/10.1007/s12555-019-0546-y
