Thesis Information

Thesis title (Chinese):

 农田杂草识别方法研究 (Research on Weed Identification Methods in Farmland)

Author:

 刘洲 (Liu Zhou)

Student ID:

 20207223071

Confidentiality level:

 Public

Thesis language:

 Chinese (chi)

Discipline code:

 085400

Discipline name:

 Engineering - Electronic Information

Student type:

 Master's candidate

Degree level:

 Master of Engineering

Degree year:

 2023

Degree-granting institution:

 西安科技大学 (Xi'an University of Science and Technology)

School:

 College of Communication and Information Engineering

Major:

 Electronics and Communication Engineering

Research direction:

 Application of deep learning to weed identification

First supervisor:

 冀汶莉 (Ji Wenli)

First supervisor's institution:

 西安科技大学 (Xi'an University of Science and Technology)

Submission date:

 2023-06-16

Defense date:

 2023-05-30

Thesis title (English):

 Research on weed identification method in farmland

Keywords (Chinese):

 Weed identification model; YOLOv5s; Normalization-based Attention Module (NAM); lightweight recognition model; EIOU

Keywords (English):

 Weed identification; YOLOv5s; Normalization-based Attention Module (NAM); lightweight recognition model; EIOU

Abstract (Chinese):

Farmland weed control relies mainly on pesticide spraying, which not only affects crop growth but also pollutes the soil and groundwater. Accurate and fast identification of farmland weeds is therefore a prerequisite for intelligent weeding. Under complex field conditions with varying illumination and mutual occlusion between crops and weeds, existing weed identification models deliver unsatisfactory accuracy. This thesis therefore takes corn seedlings and their accompanying weeds as the research object, establishes a weed identification model based on YOLOv5s, and makes it lightweight.

The main research work of this thesis is as follows:

(1) Using an independently designed field crop-and-weed image acquisition module mounted on an inspection robot, image data of corn seedlings at the 3-to-5-leaf stage and of surrounding weeds were collected and then enhanced. To address images left unclear by uneven illumination, the MSRCR algorithm was applied to enhance foreground features. All images were augmented by rotation, cropping, and similar operations, yielding an annotated dataset of 4,000 corn-seedling and weed images.

(2) To address weed diversity and the mutual occlusion of crops and weeds in farmland, an improved YOLOv5s-based weed identification model, NE-YOLOv5s, was established. A Normalization-based Attention Module (NAM) was embedded at the end of the Neck of YOLOv5s to strengthen the model's focus on targets, and the EIOU (Efficient Intersection over Union) loss replaced the localization loss of YOLOv5s. Experimental results show that NE-YOLOv5s achieves a precision of 95.3%, a recall of 95.2%, and a mean average precision of 97.4%, and can accurately identify weeds and crops under a variety of conditions.
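The EIOU localization loss mentioned above has a closed form: 1 - IoU plus penalties on the center distance and on the width and height differences, each normalized by the smallest box enclosing both boxes. The sketch below is a minimal pure-Python rendering of the published EIOU formulation for axis-aligned boxes with positive area, not code from the thesis:

```python
def eiou_loss(pred, target):
    """EIOU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Intersection over Union
    inter = max(0.0, min(px2, tx2) - max(px1, tx1)) * \
            max(0.0, min(py2, ty2) - max(py1, ty1))
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / union

    # Smallest enclosing box: width, height, squared diagonal
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    c2 = cw ** 2 + ch ** 2

    # Squared distance between box centers
    rho2 = ((px1 + px2) / 2 - (tx1 + tx2) / 2) ** 2 + \
           ((py1 + py2) / 2 - (ty1 + ty2) / 2) ** 2

    # Width/height penalties: the terms EIOU adds over DIOU
    dw2 = ((px2 - px1) - (tx2 - tx1)) ** 2
    dh2 = ((py2 - py1) - (ty2 - ty1)) ** 2

    return 1 - iou + rho2 / c2 + dw2 / cw ** 2 + dh2 / ch ** 2
```

The loss is zero only for identical boxes, and unlike plain IoU loss it still yields a useful gradient when the boxes do not overlap at all.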

(3) Because the improved model has many parameters and a large file size, which hinders deployment on mobile devices, NE-YOLOv5s was redesigned to be lightweight. The lightweight network PP-LCNet replaced the original CSPDarknet53 backbone, reducing the model's parameter count and computation. The Bottleneck modules in the Neck's C3 modules were replaced with GhostBottleneck modules built on Ghost convolution, further reducing parameters. The Hard-Sigmoid activation in PP-LCNet's SE attention mechanism was replaced with SiLU to improve recognition accuracy. Comparative experiments against other strong lightweight models and well-performing published methods show that the lightweight model occupies 6.23 MB, 54.5% smaller than YOLOv5s, with detection time reduced to 118.1 ms and a recognition accuracy of 97.3%, outperforming the other lightweight methods and meeting the requirements for lightweight recognition.
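The parameter savings from Ghost convolution are easy to see by counting weights. The sketch below is an illustration under common assumptions (the GhostNet paper's defaults of ratio s = 2 and 3x3 cheap depthwise operations, biases ignored), not figures from the thesis:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, d=3, s=2):
    """Weights in a Ghost module producing c_out channels.

    A primary k x k convolution makes c_out // s 'intrinsic' feature
    maps; cheap d x d depthwise operations then generate the remaining
    (s - 1) / s of the channels from those intrinsic maps.
    """
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k   # ordinary convolution
    cheap = intrinsic * (s - 1) * d * d  # depthwise "ghost" ops
    return primary + cheap
```

For a 128-in, 128-out, 3x3 layer this gives 147,456 weights for the standard convolution versus 74,304 for the Ghost module, roughly the factor-of-s reduction that motivates the GhostBottleneck swap.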

The weed identification model established in this thesis offers high recognition accuracy, a small model size, and real-time detection, and has practical application value. The results provide technical support for precise pesticide spraying on farmland weeds and for intelligent weeding equipment.

Abstract (English):

Weed control in farmland relies mainly on pesticide spraying, which not only affects crop growth but also pollutes the soil and groundwater. Accurate and fast identification of farmland weeds is therefore a prerequisite for intelligent weed control. Under complex field conditions with varying illumination and mutual occlusion between crops and weeds, existing weed identification models achieve unsatisfactory accuracy. This thesis therefore takes corn seedlings and the weeds accompanying them as the research object, establishes a weed recognition model based on YOLOv5s, and makes it lightweight.

The main research work of this thesis is as follows:

(1) Using an independently designed crop-and-weed image acquisition module mounted on an inspection robot, image data of corn seedlings at the 3-to-5-leaf stage and of surrounding weeds were collected and then enhanced. The MSRCR algorithm was applied to enhance foreground features in images degraded by poor illumination. All images were augmented by rotation, cropping, and similar operations, yielding an annotated dataset of 4,000 corn-seedling and weed images.
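The rotation step of the augmentation above can be illustrated on a tiny "image" stored as a list of pixel rows: a 90-degree clockwise rotation is a transpose followed by reversing each new row. This is a sketch of the idea only; the actual pipeline presumably used an image-processing library:

```python
def rotate90_cw(image):
    """Rotate a 2-D pixel grid 90 degrees clockwise.

    zip(*image) transposes rows and columns; reversing each
    transposed row then completes the clockwise rotation.
    """
    return [list(row)[::-1] for row in zip(*image)]
```

Applying the function four times returns the original grid, a handy sanity check when writing augmentation code.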

(2) To address weed diversity and the mutual occlusion of crops and weeds in agricultural environments, an improved YOLOv5s-based weed identification model, NE-YOLOv5s, was established. A Normalization-based Attention Module (NAM) was embedded at the end of the Neck of YOLOv5s to strengthen the model's focus on targets, and the EIOU (Efficient Intersection over Union) loss was adopted to improve the localization loss of YOLOv5s. Experimental results show that NE-YOLOv5s achieves a precision of 95.3%, a recall of 95.2%, and a mean average precision of 97.4%, and can accurately identify weeds and crops under a variety of conditions.
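The channel-attention idea behind NAM can be sketched in a few lines: the scale factors gamma learned by batch normalization measure how much each channel varies, so channels with larger |gamma| are weighted up and a sigmoid gates the rescaled response. The per-channel sketch below is illustrative only; the real module operates on full feature-map tensors inside the network:

```python
import math

def nam_channel_weights(gammas):
    """NAM-style channel weights from BatchNorm scale factors.

    Channel i gets weight |gamma_i| / sum_j |gamma_j|, so channels
    whose BN scale is larger receive proportionally more attention.
    """
    total = sum(abs(g) for g in gammas)
    return [abs(g) / total for g in gammas]

def apply_channel_attention(channel_responses, gammas):
    """Gate per-channel responses with a sigmoid of the weighted value."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    weights = nam_channel_weights(gammas)
    return [x * sigmoid(w * x) for x, w in zip(channel_responses, weights)]
```

Because the weights reuse parameters batch normalization already learns, the module adds almost no extra parameters, which is why it suits a model that is later made lightweight.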

(3) Because the improved weed recognition model has many parameters and a large file size, which hinders deployment on mobile devices, NE-YOLOv5s was redesigned to be lightweight. The lightweight network PP-LCNet replaced the original CSPDarknet53 backbone, reducing the model's parameter count and computation. The Bottleneck modules in the Neck's C3 modules were replaced with GhostBottleneck modules built on Ghost convolution, further reducing parameters. The Hard-Sigmoid activation in PP-LCNet's SE attention mechanism was replaced with SiLU to improve recognition accuracy. Comparative experiments against other strong lightweight models and well-performing published methods show that the lightweight model occupies 6.23 MB, 54.5% smaller than YOLOv5s, detects in 118.1 ms, and reaches a recognition accuracy of 97.3%, outperforming the other lightweight methods and meeting the requirements for lightweight recognition.
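The activation swap described above is easy to state concretely. Hard-Sigmoid is a piecewise-linear gate that saturates to 0 below -3 and to 1 above 3, while SiLU (x times sigmoid(x)) is smooth everywhere and keeps a small gradient for negative inputs; that smoothness is one commonly cited rationale for accuracy gains, though the thesis reports only the empirical result. A sketch of the two functions:

```python
import math

def hard_sigmoid(x):
    """Piecewise-linear gate used in PP-LCNet's SE block: ReLU6(x + 3) / 6."""
    return min(max(x + 3.0, 0.0), 6.0) / 6.0

def silu(x):
    """SiLU (a.k.a. Swish-1): x * sigmoid(x), smooth over all inputs."""
    return x * (1.0 / (1.0 + math.exp(-x)))
```

Note the two are not drop-in identical: Hard-Sigmoid outputs lie in [0, 1] while SiLU is unbounded above and slightly negative for negative inputs, so the swap changes the range of the attention gate as well as its smoothness.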

The weed recognition model established in this thesis offers high recognition accuracy, a small model size, and real-time detection, and has practical application value. The results provide technical support for precise pesticide spraying on farmland weeds and for intelligent weeding equipment.


CLC (Chinese Library Classification) number:

 TP391.4

Open access date:

 2023-06-19
