Thesis title (Chinese): |
Research on the Motion Monitoring Method of Underground Belt Conveyor Based on Deep Learning
|
Name: |
窦晓熠
|
Student ID: |
18208088022
|
Confidentiality: |
Public
|
Thesis language: |
chi
|
Discipline code: |
083500
|
Discipline: |
Engineering - Software Engineering
|
Student type: |
Master's
|
Degree: |
Master of Engineering
|
Degree year: |
2021
|
Institution: |
Xi'an University of Science and Technology
|
School: |
College of Computer Science and Technology
|
Major: |
Software Engineering
|
Research direction: |
Computer Vision
|
First supervisor: |
付燕
|
First supervisor's institution: |
Xi'an University of Science and Technology
|
Submission date: |
2021-06-21
|
Defense date: |
2021-06-03
|
Thesis title (English): |
Research on the Motion Monitoring Method of Underground Belt Conveyor Based on Deep Learning
|
Keywords (Chinese): |
Motion monitoring ; Underground belt conveyor ; Lightweight network ; Dual attention ; Vanishing point ; Visual velocity measurement
|
Keywords (English): |
Motion monitoring ; Underground belt conveyor ; Lightweight network ; Dual attention ; Vanishing point ; Visual velocity measurement
|
Abstract (Chinese): |
︿
~As intelligent monitoring systems play an increasingly important role in coal mine production safety, monitoring the motion state of the underground belt conveyor in surveillance video has become one of the main research directions. Intelligent monitoring of the conveyor's state not only provides more valuable information for underground video surveillance work, but also enables early warning of abnormal accidents, safeguarding production safety. Accurate, real-time monitoring of the conveyor's state is therefore particularly important. Combining the characteristics of underground coal mine surveillance video, this thesis studies a deep-learning-based method for monitoring the motion state of the belt conveyor.
(1) To detect coal blocks more accurately and in real time, this thesis proposes a coal block detection method that fuses a lightweight network with dual attention. The method first enhances underground video image quality with a preprocessing step that fuses automatic exposure correction and adaptive contrast equalization; next, the lightweight MobileNet network combined with spatial pyramid pooling extracts coal block features efficiently, ensuring real-time performance; finally, the path aggregation network PANet is combined with the dual attention mechanism CBAM to strengthen the network's sensitivity to coal blocks and raise accuracy. Experiments on the belt conveyor dataset show that using images enhanced by the proposed preprocessing in the proposed detector improves detection accuracy by 8.89%; compared with YOLOv4, the method is 19 frames/s faster and 14.37% more accurate.
(2) To compute the belt conveyor's real-time speed, this thesis proposes a visual speed measurement method combined with deep learning. The method first uses the proposed deep learning detection algorithm as the detector and a Kalman filter as the tracker to track coal blocks in video frames in real time; next, it computes two vanishing points in the image and, from the matrix transformation between the 2D and 3D coordinate systems, builds a model for the coal blocks' moving speed, from which the conveyor's real-time speed is monitored. Experiments on belt conveyor surveillance video show that the proposed deep-learning-based tracking method tracks coal blocks effectively, with speed measurements within 0.7 m/s of the true values.
(3) Overall, the proposed methods not only improve the detection and recognition of coal blocks on the belt conveyor, but also, through coal block tracking and speed measurement, effectively detect no-load and abnormal operating states, realizing intelligent monitoring of the conveyor's motion state.
﹀
|
Abstract (English): |
︿
~As the intelligent monitoring system plays an increasingly important role in coal mine production safety, monitoring the motion state of the underground belt conveyor in surveillance video has become one of the main research directions. Intelligent monitoring of the belt conveyor not only provides more valuable information for underground video surveillance work, but also gives early warning of abnormal accidents and safeguards production safety. It is therefore essential to monitor the conveyor's state accurately and in real time. Combining the characteristics of underground surveillance video, this thesis studies deep-learning-based methods for monitoring the motion state of the belt conveyor.
(1) To detect coal blocks more accurately and in real time, this thesis proposes a coal detection method combining a lightweight network with dual attention. First, an image preprocessing method fusing automatic exposure correction and adaptive contrast equalization enhances the quality of underground video images. Second, the lightweight MobileNet network combined with spatial pyramid pooling extracts coal features efficiently, ensuring real-time performance. Finally, the path aggregation network PANet is combined with the dual attention mechanism CBAM to increase the network's sensitivity to coal and improve accuracy. Experiments on the belt conveyor dataset show that feeding images enhanced by the proposed preprocessing into the proposed detector raises detection accuracy by 8.89%; compared with YOLOv4, the proposed method is 19 frames/s faster and 14.37% more accurate.
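The adaptive contrast equalization step in the preprocessing described above can be sketched as follows. This is a simplified global histogram equalization on an 8-bit grayscale image (the adaptive variant applies a similar mapping per local tile); it is an illustrative assumption, not the thesis's exact implementation.

```python
import numpy as np

def equalize_contrast(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image.

    A simplified stand-in for the adaptive contrast equalization used in
    the preprocessing stage, which operates tile-by-tile instead.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    span = cdf[-1] - cdf_min
    if span == 0:                       # flat image: nothing to stretch
        return img.copy()
    # Map the darkest occupied bin to 0 and the brightest to 255,
    # spreading the dim underground image over the full intensity range.
    lut = np.clip(np.round((cdf - cdf_min) / span * 255), 0, 255).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast frame whose intensities cluster in a narrow band, the mapping stretches them to the full 0-255 range, which is what makes small coal blocks easier for the detector to separate from the belt.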
(2) To compute the real-time speed of the belt conveyor, this thesis proposes a visual speed measurement method combined with deep learning. First, the proposed deep learning detection algorithm serves as the detector and a Kalman filter as the tracker, so that coal blocks are tracked in video frames in real time. Second, by computing two vanishing points in the image and using the matrix transformation between the 2D image and 3D world coordinate systems, a model of the coal blocks' moving speed is established, from which the conveyor's real-time speed is monitored. Experiments on belt conveyor surveillance video show that the proposed deep-learning-based tracker tracks coal blocks effectively, and measured speeds deviate from the true values by less than 0.7 m/s.
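The detector-plus-Kalman tracking described above can be sketched with a constant-velocity Kalman filter over a single target's image position; the noise settings, single-target simplification, and unit time step are illustrative assumptions rather than the thesis's configuration.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter tracking one coal block's image position.

    State x = [u, v, du, dv]; measurements are detector box centres (u, v).
    Noise levels below are illustrative, not the thesis's tuning.
    """
    def __init__(self, u: float, v: float, dt: float = 1.0):
        self.x = np.array([u, v, 0.0, 0.0])      # initial state, zero velocity
        self.P = np.eye(4) * 10.0                # initial state uncertainty
        self.F = np.eye(4)                       # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                    # we observe position only
        self.Q = np.eye(4) * 1e-2                # process noise
        self.R = np.eye(2) * 1.0                 # measurement noise

    def predict(self) -> np.ndarray:
        """Propagate the state one frame ahead; returns predicted (u, v)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z) -> None:
        """Correct the prediction with a detector measurement (u, v)."""
        y = np.asarray(z, float) - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Fed a sequence of detections of a block moving along the belt, the filter's velocity components converge toward the block's true per-frame displacement, which is the quantity the speed model then converts to metres per second.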
(3) Overall, the proposed methods not only improve the detection and recognition of coal blocks on the belt conveyor, but also, through tracking and speed measurement of the coal blocks, effectively detect no-load or abnormal operating states, realizing intelligent monitoring of the belt conveyor's motion state.
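The vanishing-point step in (2) can be illustrated with a standard calibration result: two vanishing points of orthogonal world directions (for example, along and across the belt) fix the camera's focal length, after which a ground-plane scale converts tracked pixel displacement into metres per second. The fixed metres-per-pixel scale below is a deliberate simplification of the thesis's 2D-to-3D matrix transformation, and the function names are this sketch's own.

```python
import math

def focal_from_vanishing_points(vp1, vp2, principal=(0.0, 0.0)) -> float:
    """Focal length (in pixels) from two vanishing points of orthogonal
    world directions, assuming square pixels and a known principal point.

    Orthogonality of the two directions gives (vp1 - c) . (vp2 - c) + f^2 = 0.
    """
    d = ((vp1[0] - principal[0]) * (vp2[0] - principal[0])
         + (vp1[1] - principal[1]) * (vp2[1] - principal[1]))
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-d)

def speed_mps(p0, p1, metres_per_pixel: float, fps: float) -> float:
    """Speed from a tracked displacement between consecutive frames.

    Uses a fixed local ground-plane scale, a simplification of the full
    2D-to-3D coordinate transformation used in the thesis.
    """
    du, dv = p1[0] - p0[0], p1[1] - p0[1]
    return math.hypot(du, dv) * metres_per_pixel * fps
```

In the full method the per-pixel scale varies across the image and is derived from the calibrated geometry, but the pipeline shape is the same: calibrate once from the vanishing points, then turn each tracked displacement into a speed and flag readings outside the conveyor's normal operating range.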
﹀
|
CLC number: |
TP391.4
|
Open access date: |
2021-06-22
|