Journal of Vibration Testing and System Dynamics

C. Steve Suh (editor), Pawel Olejnik (editor), Xianguo Tuo (editor)

Pawel Olejnik (editor)

Lodz University of Technology, Poland

Email: pawel.olejnik@p.lodz.pl

C. Steve Suh (editor)

Texas A&M University, USA

Email: ssuh@tamu.edu

Xianguo Tuo (editor)

Sichuan University of Science and Engineering, China

Email: tuoxianguo@suse.edu.cn


Wear Detection Method of Electric Power Field Safety Appliances based on Deep Learning

Journal of Vibration Testing and System Dynamics 8(1) (2024) 67--76 | DOI:10.5890/JVTSD.2024.03.005

Zhu Shi, Hao Wu, Zhong-yang Jin, Hong Song

School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin, China


Abstract

Aiming at the low detection accuracy for small and medium-sized targets and for occluded targets at electric power operation sites, an improved wear detection method, SBC_YOLOv5, is proposed. The method is built on dilated convolution: it first improves the spatial pyramid pooling layer with a multi-scale pooling operation and an MHSA self-attention module, so that the layer obtains richer receptive fields. Second, it improves the Neck network with a feature-bridging operation and the CARAFE operator, strengthening the network's ability to extract and compensate semantic features. Finally, it optimizes the loss function with CIoU to improve the regression ability of the model. Experimental results show that the proposed SBC_YOLOv5 wear detection method reaches a mean average precision (mAP) of 82.3%, a recall of 81.5%, and a detection speed of 44 FPS. Compared with the original YOLOv5, YOLOv4, and Faster RCNN, its mAP is higher by 1.5%, 10.27%, and 25.21%, respectively. The method effectively improves the detection accuracy of small and occluded targets and meets the real-time and accuracy requirements of wear detection at power operation sites.
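To make the spatial-pyramid-pooling change concrete, the following is a minimal PyTorch sketch of a block that combines multi-scale max pooling with a multi-head self-attention (MHSA) stage, in the spirit of the improvement described in the abstract. The class name SPPMHSA, the pool sizes (5, 9, 13), and the head count are illustrative assumptions, not the authors' published layer.

import torch
import torch.nn as nn

class SPPMHSA(nn.Module):
    """Illustrative SPP block: multi-scale max pooling followed by multi-head
    self-attention over spatial positions (a sketch, not the paper's code)."""
    def __init__(self, channels, pool_sizes=(5, 9, 13), num_heads=4):
        super().__init__()
        # Parallel max-pool branches enlarge the receptive field at constant resolution.
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes]
        )
        # 1x1 conv fuses the concatenated pyramid back down to `channels`;
        # `channels` must be divisible by `num_heads` for the attention layer.
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        pyramid = [x] + [pool(x) for pool in self.pools]
        y = self.fuse(torch.cat(pyramid, dim=1))       # (B, C, H, W)
        b, c, h, w = y.shape
        seq = y.flatten(2).transpose(1, 2)             # (B, H*W, C) tokens for attention
        attn_out, _ = self.attn(seq, seq, seq)         # global spatial self-attention
        return (seq + attn_out).transpose(1, 2).reshape(b, c, h, w)

For example, SPPMHSA(64)(torch.randn(1, 64, 20, 20)) returns a tensor of the same shape, with the pooled pyramid and attention stage providing the enlarged receptive field the abstract refers to.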

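The abstract also states that the regression term of the loss is optimized with CIoU. The sketch below implements the standard Complete-IoU loss (Zheng et al., 2020) for corner-format boxes; it is a reference formulation of that published definition, not the authors' training code.

import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """Complete-IoU loss for (x1, y1, x2, y2) boxes; pred and target have shape (N, 4)."""
    # Intersection area
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union and plain IoU
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Squared center distance normalized by the diagonal of the smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4

    # Aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    ciou = iou - rho2 / c2 - alpha * v
    return (1 - ciou).mean()

# Example: ciou_loss(torch.tensor([[0., 0., 10., 10.]]), torch.tensor([[1., 1., 11., 11.]]))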