Thesis Information

Thesis title (Chinese):

 Research on an abnormal posture recognition algorithm for underground miners based on MicroNet

Name:

 刘祥

Student ID:

 20307223002

Confidentiality level:

 Public

Thesis language:

 Chinese (chi)

Discipline code:

 085400

Discipline name:

 Engineering - Electronic Information

Student type:

 Master's

Degree level:

 Master of Engineering

Degree year:

 2023

Degree-granting institution:

 西安科技大学

School:

 通信与信息工程学院

Major:

 Electronics and Communication Engineering

Research direction:

 Computer vision

First supervisor's name:

 韩晓冰

First supervisor's institution:

 西安科技大学

Thesis submission date:

 2023-06-15

Thesis defense date:

 2023-05-30

Thesis title (foreign language):

 Research on an abnormal posture recognition algorithm for underground miners based on MicroNet

Keywords (Chinese):

 MicroNet; OpenPose; Lightweight network; Posture recognition

Keywords (foreign language):

 MicroNet; OpenPose; Lightweight network; Posture recognition

Abstract (Chinese):

Coal mine accidents often endanger the lives of underground workers, so detecting and tracking underground personnel and recognizing their abnormal postures is particularly important. Although the mine personnel positioning and management systems in common use can locate and track underground personnel with high precision, they cannot promptly and accurately detect abnormal postures caused by accidents, such as a person falling or lying on the ground unable to get up, which hampers subsequent emergency rescue work. To address this, this thesis designs a MicroNet-based abnormal posture recognition algorithm that detects and recognizes the postures of underground personnel accurately and in real time.

Building on an analysis of the principle and network structure of the OpenPose keypoint detection algorithm, this thesis tackles the algorithm's complex network structure and long model training time by introducing a lightweight network to reduce its structural complexity. Starting from a simulation analysis of the LightWeightOpenPose keypoint detection algorithm, its feature extraction backbone is replaced with the more lightweight MicroNet, and the keypoint detection algorithms before and after the improvement are evaluated on the COCO2017 dataset. The results show that the MicroNet-based keypoint detection algorithm yields a 47 MB model after 37 hours of training and reaches an mAP of 64.7% at IoU = 0.5.

Traditional abnormal posture recognition algorithms struggle to set judgment thresholds for different postures, are prone to false and missed detections, and take too long to compute the motion information of human skeletal joints to guarantee real-time operation. This thesis therefore introduces MicroNet to design an abnormal posture recognition network: a dataset is built from the human skeleton maps drawn by the keypoint detection algorithm, the abnormal posture recognition model is trained on it, and real video data is collected to verify the model's performance. The experiments show that the MicroNet-based abnormal posture recognition method needs a model of only about 10 MB, reaches a detection speed of 0.02 seconds per image, and achieves a recognition accuracy of about 90%.

The simulation results show that, compared with the LightWeightOpenPose algorithm, the MicroNet-based keypoint detection algorithm shortens model training time by 47% and reduces model size by 44%, while mAP drops by only 1.1%. Training is therefore much faster and the model much smaller at only a small cost in accuracy, demonstrating that the MicroNet structure is an effective lightweight improvement to the keypoint detection algorithm. The abnormal posture recognition network designed alongside it processes abnormal postures of personnel in the mine environment in real time with an average recognition accuracy of 88%. Overall, the algorithm recognizes abnormal postures of underground personnel well and provides a useful reference for posture recognition applications.

Abstract (foreign language):

Coal mine accidents often threaten the lives of underground miners, so detecting and tracking underground miners and identifying their abnormal postures is particularly important. Although the mine personnel positioning and management systems currently in common use can locate and track underground personnel with high precision, they cannot detect and identify abnormal postures caused by accidents, such as a person falling or lying on the ground unable to get up, in a timely and accurate manner, which affects subsequent emergency rescue work. Therefore, this thesis designs an abnormal posture recognition algorithm based on MicroNet that detects and recognizes the postures of underground personnel accurately and in real time.

Based on an analysis of the principle and network structure of the OpenPose keypoint detection algorithm, this thesis introduces a lightweight network to address the algorithm's complex network structure and long model training time by reducing its structural complexity. Building on a simulation analysis of the LightWeightOpenPose keypoint detection algorithm, its feature extraction backbone is replaced with the more lightweight MicroNet, and the keypoint detection algorithms before and after the improvement are simulated and analyzed on the COCO2017 dataset. The improved results show that the MicroNet-based keypoint detection algorithm produces a 47 MB model after 37 hours of training, with an mAP of 64.7% when IoU = 0.5.
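To make the backbone swap concrete, the following is a minimal PyTorch sketch, not the thesis implementation: the MicroBackbone, PoseHead, and MicroNetPose names are illustrative placeholders, MicroBackbone is only a simplified stand-in for the real MicroNet, and the 18-keypoint/19-limb channel layout is the usual COCO-style OpenPose configuration assumed here.

import torch
import torch.nn as nn

class MicroBackbone(nn.Module):
    # Simplified stand-in for a MicroNet-style low-FLOP feature extractor:
    # depthwise + pointwise convolutions with an overall stride of 8.
    def __init__(self, out_ch=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU6(inplace=True),
            nn.Conv2d(16, 16, 3, stride=2, padding=1, groups=16),   # depthwise
            nn.Conv2d(16, 64, 1), nn.ReLU6(inplace=True),           # pointwise
            nn.Conv2d(64, 64, 3, stride=2, padding=1, groups=64),   # depthwise
            nn.Conv2d(64, out_ch, 1), nn.ReLU6(inplace=True),       # pointwise
        )

    def forward(self, x):
        return self.features(x)

class PoseHead(nn.Module):
    # OpenPose-style two-branch head: keypoint heatmaps and part affinity fields.
    def __init__(self, in_ch, num_keypoints=18, num_limbs=19):
        super().__init__()
        self.heatmaps = nn.Conv2d(in_ch, num_keypoints + 1, 1)  # +1 background channel
        self.pafs = nn.Conv2d(in_ch, num_limbs * 2, 1)           # x/y vector field per limb

    def forward(self, feats):
        return self.heatmaps(feats), self.pafs(feats)

class MicroNetPose(nn.Module):
    # Pose network whose feature extraction backbone can be swapped for a lighter one.
    def __init__(self):
        super().__init__()
        self.backbone = MicroBackbone(out_ch=128)  # swap point: plug in a real MicroNet here
        self.head = PoseHead(in_ch=128)

    def forward(self, x):
        return self.head(self.backbone(x))

if __name__ == "__main__":
    heatmaps, pafs = MicroNetPose()(torch.randn(1, 3, 368, 368))
    print(heatmaps.shape, pafs.shape)  # torch.Size([1, 19, 46, 46]) torch.Size([1, 38, 46, 46])

Swapping only the backbone leaves the heatmap/PAF head and the downstream keypoint grouping unchanged, which is why the model can become smaller and faster to train without altering the rest of the detection pipeline.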

In traditional abnormal posture recognition algorithms, it is difficult to set judgment thresholds for different postures, so misjudgments and missed detections occur easily, and computing the motion information of human skeletal joints takes too long to guarantee real-time performance. This thesis therefore introduces MicroNet to design an abnormal posture recognition network: the human skeleton maps drawn by the keypoint detection algorithm are used to build a dataset, the abnormal posture recognition model is trained on it, and actual video data is collected to verify the model's performance. The experimental results show that the MicroNet-based abnormal posture recognition method requires a model of only about 10 MB, reaches a detection speed of 0.02 seconds per image, and achieves a recognition accuracy of about 90%.
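As a sketch of this second stage, the snippet below is again an assumption-laden illustration rather than the thesis code: SkeletonPostureNet is a hypothetical name, the skeleton maps are assumed to be rendered as 3-channel images, and only two posture classes (e.g. normal/fallen) are assumed. It shows a lightweight classifier over skeleton images together with the kind of per-image latency measurement that sits behind a figure like 0.02 s per image.

import time
import torch
import torch.nn as nn

class SkeletonPostureNet(nn.Module):
    # Small CNN that classifies rendered skeleton maps into posture classes.
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = SkeletonPostureNet(num_classes=2).eval()
    skeleton_map = torch.randn(1, 3, 224, 224)  # one rendered skeleton image
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(100):
            logits = model(skeleton_map)
        per_image = (time.perf_counter() - start) / 100
    print(f"predicted class: {logits.argmax(1).item()}, "
          f"average latency: {per_image * 1000:.1f} ms per image")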

The simulation results show that, compared with the LightWeightOpenPose algorithm, the MicroNet-based keypoint detection algorithm reduces model training time by 47% and model size by 44%, while mAP decreases by only 1.1%. With only a small loss of accuracy, training is faster and the model is much smaller, which demonstrates that the MicroNet structure is an effective lightweight improvement to the keypoint detection algorithm. In addition, the abnormal posture recognition network designed in this thesis processes abnormal postures of personnel in the mine environment in real time with an average recognition accuracy of 88%. Overall, the algorithm performs well at recognizing abnormal postures of underground personnel and provides a useful reference for posture recognition applications.
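For orientation only, the baselines implied by these reductions can be back-calculated from the figures quoted above; the short Python snippet below is just that arithmetic, and the resulting LightWeightOpenPose numbers are implied values, not measurements reported here.

# Back-of-the-envelope check derived only from the reductions quoted above.
improved_size_mb, size_reduction = 47, 0.44   # 47 MB is 44% smaller than the baseline
improved_hours, time_reduction = 37, 0.47     # 37 h is 47% faster than the baseline
implied_baseline_size = improved_size_mb / (1 - size_reduction)  # ~84 MB
implied_baseline_hours = improved_hours / (1 - time_reduction)   # ~70 h
implied_baseline_map = 64.7 + 1.1  # assuming the 1.1% drop is in percentage points (~65.8%)
print(f"implied baselines: ~{implied_baseline_size:.0f} MB model, "
      f"~{implied_baseline_hours:.0f} h training, mAP ~{implied_baseline_map:.1f}%")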

CLC number:

 TP391.4

Open access date:

 2023-06-16
