Thesis Information

Chinese Title:

 基于 YOLOv8 的矿井胶带巷火灾视频智能识别方法研究

Author:

 何地

Student ID:

 21220226119

Confidentiality Level:

 Public

Thesis Language:

 Chinese

Discipline Code:

 085700

Discipline Name:

 Engineering - Resources and Environment

Student Type:

 Master's student

Degree Level:

 Master of Engineering

Degree Year:

 2024

Degree-Granting Institution:

 Xi'an University of Science and Technology

School:

 College of Safety Science and Engineering

Major:

 Safety Engineering

Research Direction:

 Fire Science and Engineering

First Supervisor:

 王伟峰

First Supervisor's Institution:

 Xi'an University of Science and Technology

Second Supervisor:

 邢震

Submission Date:

 2024-06-17

Defense Date:

 2024-06-02

English Title:

 Research on Intelligent Video Recognition Method for Mine Belt Lane Fire Based on YOLOv8

Chinese Keywords:

 YOLOv8; 胶带火灾; 智能识别; 视频识别

English Keywords:

 YOLOv8; belt fire; intelligent recognition; video recognition

Chinese Abstract:

Mine fires are one of the five major disasters in coal mines; they cause severe losses of life and property, and workers and rescuers have even lost their lives fighting them. Belt fires, one of the main forms of mine fire, are prone to occur, hard to detect, spread rapidly, and are difficult to fight, and the losses they cause each year are incalculable. Keeping fire conditions in coal mine belt roadways under constant surveillance, so that a belt fire can be controlled at an early stage and losses reduced, is therefore especially important and consistent with the prevention-and-control principle of fighting fires early, while they are still small. Manual monitoring suffers from slackness and lapses of attention, and the critical window for extinguishing a fire is often missed as a result. Having machines take over from human observers, exploiting the fact that machines do not tire and respond quickly, allows mine belt fires to be monitored more reliably; this also fits the policy background of intelligent mine construction and is of great significance for advancing intelligent coal mine construction and coal mine production safety.

This thesis adopts the YOLOv8 algorithm for coal mine fire detection. The environment of a coal mine belt roadway differs greatly from surface conditions: light sources are unevenly distributed, the field of view is dark, and dust concentration is high, all of which make recognition difficult. These harsh conditions cause conventional machine vision models to perform poorly underground. To improve recognition performance in mine belt roadways, this thesis proposes an improved machine vision algorithm that addresses these problems. The work of this thesis is organized around that approach, and the main research work is as follows:

First, a mine fire dataset was built. Because mine fires occur in closed environments, relevant images are difficult to collect. To make the training dataset as similar as possible to real mine fire scenes, images resembling coal mine fires were selected from fire datasets of tunnels and other enclosed spaces published online by authoritative laboratories; a mine roadway simulation platform was built to collect images of burning conveyor belts; and a small number of images were gathered from actual coal mine fire scenes. Images from these three sources together form the coal mine fire dataset used in this work. To further enrich the dataset, it was expanded with image flipping, rotation, scaling, Mosaic data augmentation, and other techniques. The expanded dataset contains substantially more samples, laying a solid foundation for fully training the YOLOv8 model.
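The record does not include the augmentation code. The following is a minimal sketch of the offline flip/rotate/scale expansion using OpenCV, with the directory layout, rotation angle, and scale factor assumed for illustration; Mosaic augmentation is normally applied inside the YOLOv8 training pipeline rather than offline, and any geometric transform must also be applied to the bounding-box labels.

```python
import cv2
import numpy as np
from pathlib import Path

def augment_image(img: np.ndarray) -> list[np.ndarray]:
    """Return flipped, rotated, and scaled variants of one fire image."""
    h, w = img.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)  # rotate 15 degrees about the center
    return [
        cv2.flip(img, 1),                                   # horizontal flip
        cv2.warpAffine(img, rot, (w, h)),                   # rotated copy
        cv2.resize(img, None, fx=0.8, fy=0.8),              # scaled copy (80%)
    ]

# Hypothetical layout: raw images under data/fire/raw, augmented copies written to data/fire/augmented.
src_dir, dst_dir = Path("data/fire/raw"), Path("data/fire/augmented")
dst_dir.mkdir(parents=True, exist_ok=True)
for img_path in src_dir.glob("*.jpg"):
    img = cv2.imread(str(img_path))
    for i, aug in enumerate(augment_image(img)):
        cv2.imwrite(str(dst_dir / f"{img_path.stem}_aug{i}.jpg"), aug)
```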

Second, to achieve better detection performance, the network structure of the YOLOv8 model was improved for the characteristics of the coal mine belt roadway environment. Two improvements, based on the BiFormer attention mechanism and the BiFPN feature fusion mechanism, were proposed to strengthen the network's ability to extract feature information from input images, and the model's CIOU loss function was replaced with the GIOU loss function, which on this dataset speeds up convergence and improves robustness. The final improved model is 0.4 ms faster than the original and its mAP is 7.4% higher, so performance improves while detection speed is maintained.
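The abstract does not reproduce the loss formula. For reference, the GIOU loss that replaces CIOU can be written as L_GIoU = 1 - IoU + |C \ (A ∪ B)| / |C|, where C is the smallest box enclosing the predicted box A and the ground-truth box B. Below is a small NumPy sketch of that computation for axis-aligned boxes in (x1, y1, x2, y2) format, written for illustration and independent of the thesis code; in practice the swap is made inside YOLOv8's bounding-box loss rather than as a standalone function.

```python
import numpy as np

def giou_loss(box_p: np.ndarray, box_g: np.ndarray) -> float:
    """GIoU loss for two axis-aligned boxes given as [x1, y1, x2, y2]."""
    # Intersection area
    ix1, iy1 = np.maximum(box_p[:2], box_g[:2])
    ix2, iy2 = np.minimum(box_p[2:], box_g[2:])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)

    # Union area
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    union = area_p + area_g - inter
    iou = inter / union

    # Smallest enclosing box C
    cx1, cy1 = np.minimum(box_p[:2], box_g[:2])
    cx2, cy2 = np.maximum(box_p[2:], box_g[2:])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (area_c - union) / area_c
    return 1.0 - giou  # loss lies in [0, 2]

# Example: a predicted fire box versus its ground truth
print(giou_loss(np.array([10, 10, 60, 60]), np.array([20, 20, 80, 80])))
```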

Finally, to make the improved YOLOv8 detection algorithm easier to use and to further verify its performance, a "mine belt fire detection system" was designed and implemented: the visualization interface was written with PyQt5, and a Rockchip RV1126 board was used as the hardware environment, with the detection model embedded in the development board to complete the software and hardware detection system. Roadway fire recognition experiments were then carried out on the mine roadway simulation platform described in this thesis, further demonstrating the practical value of this work.
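The system's source code is not part of this record. The sketch below only illustrates the general shape of such a PyQt5 viewer, assuming the improved weights are available as an Ultralytics model file (the name best.pt and the camera index are hypothetical); on the RV1126 itself the model would instead be converted and executed through Rockchip's RKNN toolchain, which is not shown here.

```python
import sys
import cv2
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QLabel, QMainWindow
from ultralytics import YOLO  # assumption: improved weights exported in Ultralytics format

class FireMonitor(QMainWindow):
    def __init__(self, weights="best.pt", camera=0):
        super().__init__()
        self.setWindowTitle("Mine Belt Fire Detection (sketch)")
        self.view = QLabel(self)
        self.setCentralWidget(self.view)
        self.model = YOLO(weights)
        self.cap = cv2.VideoCapture(camera)
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.update_frame)
        self.timer.start(40)  # refresh roughly every 40 ms (~25 fps)

    def update_frame(self):
        ok, frame = self.cap.read()
        if not ok:
            return
        # Run detection on the frame and draw the boxes YOLOv8 returns
        annotated = self.model(frame, verbose=False)[0].plot()
        rgb = cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB)
        h, w, _ = rgb.shape
        image = QImage(rgb.data, w, h, 3 * w, QImage.Format_RGB888)
        self.view.setPixmap(QPixmap.fromImage(image))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = FireMonitor()
    win.show()
    sys.exit(app.exec_())
```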

English Abstract:

Mine fires, one of the five major disasters in coal mines, cause serious losses of life and property, and workers and rescue personnel have even lost their lives in them. Belt fires, one of the main forms of mine fire, are prone to occur, hard to detect, spread rapidly, and are difficult to fight, and the losses they cause every year are incalculable. It is therefore important to monitor fire conditions in coal mine belt roadways continuously and to control a belt fire in its early stage so as to reduce losses, in line with the prevention-and-control principle of fighting fires early, while they are small. Manual supervision suffers from slackness and lapses of concentration, which often lead to missing the critical period for firefighting. Replacing manual monitoring with machines, which are tireless and responsive, enables better monitoring of mine belt fires; it also fits the policy background of intelligent mine construction and is of great significance for promoting intelligent coal mine construction and coal mine production safety.

This thesis adopts the YOLOv8 algorithm for coal mine fire detection. Because the coal mine belt roadway environment differs greatly from surface conditions, recognition is hindered by unevenly distributed light sources, a dark field of view, and high dust concentration, and the harsh environment causes conventional machine vision models to perform poorly underground. To improve recognition performance in mine belt roadways, this thesis proposes an improved machine vision approach that addresses these problems. The work is organized around that approach, and the main research work is as follows:

First, a mine fire dataset was created. Mine fires occur in closed environments, so relevant image data are difficult to collect. To make the training dataset as similar as possible to real mine fire scenes, images resembling coal mine fires were selected from fire datasets of tunnels and other enclosed spaces published online by authoritative laboratories; a mine roadway simulation platform was built to collect images of burning conveyor belts; and a small number of images were collected from actual coal mine fire scenes. Images from these three sources together constitute the coal mine fire dataset for this work. To further enrich the dataset, it was expanded with image flipping, rotation, scaling, Mosaic data augmentation, and other methods. The expanded dataset contains substantially more samples, laying a solid foundation for fully training the YOLOv8 model.

Second, to achieve better detection results, the network structure of the YOLOv8 model was improved for the characteristics of the coal mine belt roadway environment. Two improvements, based on the BiFormer attention mechanism and the BiFPN feature fusion mechanism, were proposed to enhance the network's ability to extract feature information from input images, and the model's CIOU loss function was replaced with the GIOU loss function, which on this dataset accelerates convergence and improves robustness. The final improved model is 0.4 ms faster than the original model and its mAP is 7.4% higher, improving performance while preserving detection speed.

Finally, to make the improved YOLOv8 detection algorithm more convenient to use and to further validate its performance, a "Mine Belt Fire Detection System" was designed and implemented. PyQt5 was used to build the visualization interface, and a Rockchip RV1126 board served as the hardware environment, with the detection model embedded in the development board to complete the software and hardware detection system. Roadway fire recognition experiments on the mine roadway simulation platform described in this thesis further demonstrate the practical value of this work.

CLC Number:

 TD76    

Open Date:

 2024-06-17    

