Thesis Title (Chinese): | Research on Object Detection and Segmentation Algorithms for Unmanned Driving |
Name: | |
Student ID: | 21206223051 |
Confidentiality Level: | Public |
Thesis Language: | Chinese |
Discipline Code: | 085400 |
Discipline Name: | Engineering - Electronic Information |
Student Type: | Master's |
Degree Level: | Master of Engineering |
Degree Year: | 2024 |
Institution: | Xi'an University of Science and Technology |
Department: | |
Major: | |
Research Direction: | Computer Vision |
First Supervisor Name: | |
First Supervisor Institution: | |
Thesis Submission Date: | 2024-06-24 |
Thesis Defense Date: | 2024-06-06 |
Thesis Title (English): | Research on Road Object Detection and Segmentation Algorithms for Unmanned Driving |
Keywords (Chinese): | |
Keywords (English): | Unmanned driving; Object detection; Drivable area segmentation; Multi-task learning; Environment perception |
Abstract (Chinese): |
With the rapid development of science and technology, continuous progress in artificial intelligence and computer vision has made unmanned driving a focus of the industry. Road object detection and drivable area segmentation are key to environment perception for unmanned driving. This thesis focuses on drivable area segmentation for urban roads: it extracts image features with deep neural networks and combines object detection, semantic segmentation, and multi-task learning to improve pedestrian and vehicle detection and drivable area segmentation algorithms. Following the idea of multi-task learning, the road object detection and drivable area segmentation models are then merged into a single multi-task road drivable area detection model.

The main research contents are:

(1) Road images often contain many small objects and mutually occluded objects, which easily cause false and missed detections. On the basis of YOLOv7, a residual-structure object detection algorithm, YOLO-RM, is built. Building on the ELAN module, the residual-based ELAN_R and ELAN_Es modules are designed to strengthen the extraction of spatial feature information and to avoid the learning error brought by deeper networks. On the basis of the FPN structure, a multi-scale adaptive feature fusion structure, MFPN, is designed: it adds a small-object feature output layer to the feature fusion network, improving the extraction of pedestrian and vehicle features across scales and reducing the missed detection rate for small objects.

(2) To raise the inference speed of the semantic segmentation algorithm, a real-time drivable area segmentation algorithm, DeepLab-SF, is built on DeepLabv3+. Strided convolutions replace the dilated convolutions in ASPP to form a strided spatial pyramid pooling module, reducing the computation of the pooling stage; a feature fusion network designed in the decoder effectively combines shallow and deep features from the semantic and spatial branches.

(3) A vision-based multi-task road drivable area detection model is designed. It unifies object detection and drivable area segmentation: by sharing one efficient feature extraction network, it simultaneously perceives the drivable space ahead of the vehicle and identifies pedestrian and vehicle objects.

The proposed multi-task road drivable area detection model meets the requirements of environment perception for unmanned driving: pedestrian and vehicle detection reaches 78.1% mAP, drivable area segmentation reaches 91.1% mIoU, and inference runs at 36.4 frames per second, satisfying both the accuracy and the real-time requirements. |
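Item (1) above builds a residual variant of YOLOv7's ELAN block. The exact layout of ELAN_R is not spelled out in this record, so the PyTorch sketch below is only an illustration of the idea: it assumes the usual ELAN pattern of stacked 3x3 convolutions with concatenated intermediate outputs, and adds an identity shortcut. ConvBnSiLU, the channel widths, and the block count are assumptions, not the thesis's actual configuration.

```python
# Minimal sketch of a residual ELAN-style block (hypothetical "ELAN_R").
import torch
import torch.nn as nn

class ConvBnSiLU(nn.Module):
    """Conv + BatchNorm + SiLU, the basic unit used throughout YOLOv7."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, 1, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ELAN_R(nn.Module):
    """ELAN block with an added identity shortcut (assumed layout)."""
    def __init__(self, c_in, c_mid=64, n_blocks=4):
        super().__init__()
        self.branch1 = ConvBnSiLU(c_in, c_mid, 1)   # parallel 1x1 path
        self.branch2 = ConvBnSiLU(c_in, c_mid, 1)   # entry to the conv stack
        self.blocks = nn.ModuleList(
            ConvBnSiLU(c_mid, c_mid, 3) for _ in range(n_blocks)
        )
        # Fuse all concatenated outputs back to c_in so the identity
        # shortcut can be added element-wise.
        self.fuse = ConvBnSiLU(c_mid * (n_blocks + 2), c_in, 1)

    def forward(self, x):
        y = self.branch2(x)
        outs = [self.branch1(x), y]
        for blk in self.blocks:
            y = blk(y)
            outs.append(y)       # keep every intermediate map, ELAN-style
        return x + self.fuse(torch.cat(outs, dim=1))  # residual connection
```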
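The MFPN of item (1) adds a small-object output layer and fuses scales adaptively. One plausible reading, sketched below, gives each top-down fusion step learnable weights and extends the pyramid down to a high-resolution P2 level; AdaptiveFuse, MFPN, and the assumption that C2..C5 are already projected to a common channel width are illustrative choices, not taken from the thesis.

```python
# Sketch of a multi-scale adaptive FPN with an extra small-object level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFuse(nn.Module):
    """Weighted fusion of a lateral feature with an upsampled deeper one."""
    def __init__(self, channels):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))         # learnable fusion weights
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, lateral, deeper):
        deeper = F.interpolate(deeper, size=lateral.shape[-2:], mode="nearest")
        w = torch.softmax(self.w, dim=0)             # normalize the weights
        return self.conv(w[0] * lateral + w[1] * deeper)

class MFPN(nn.Module):
    """Top-down pathway over C2..C5; P2 is the extra small-object output."""
    def __init__(self, channels=128):
        super().__init__()
        self.fuse = nn.ModuleList(AdaptiveFuse(channels) for _ in range(3))

    def forward(self, c2, c3, c4, c5):
        p4 = self.fuse[0](c4, c5)
        p3 = self.fuse[1](c3, p4)
        p2 = self.fuse[2](c2, p3)   # added layer for small pedestrians/cars
        return p2, p3, p4, c5
```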
Abstract (English): |
With the rapid development of science and technology, continuous progress in artificial intelligence and computer vision has made unmanned driving a focus of the industry. Within it, road pedestrian and vehicle detection and drivable area segmentation are key to environment perception. This thesis focuses on the segmentation of drivable areas on urban roads: it extracts image features with deep neural networks and combines object detection, semantic segmentation, and multi-task learning to improve pedestrian and vehicle detection and drivable area segmentation algorithms. Based on the idea of multi-task learning, the road object detection and drivable area segmentation models are integrated into a single multi-task road drivable area detection model.

The primary research components are as follows:

(1) Mutual occlusion and large scale variation among pedestrian and vehicle objects often lead to false and missed detections. We therefore extend YOLOv7 with a residual-structure object detection algorithm, YOLO-RM. Building on the ELAN module, we design the ELAN_R and ELAN_Es modules, which use residual connections to better capture spatial feature information and to mitigate the learning error introduced by deeper networks. We also devise MFPN, a multi-scale adaptive feature fusion structure inspired by FPN, and add a small-object feature output layer to the fusion network. This strengthens feature extraction for pedestrians and vehicles at varying scales and reduces the missed detection rate for small objects.

(2) To raise the inference speed of the semantic segmentation algorithm, we build DeepLab-SF, a real-time drivable area segmentation algorithm based on DeepLabv3+. We replace the dilated convolutions in ASPP with strided convolutions, yielding a strided spatial pyramid pooling module that reduces the computation of the pooling stage. A feature fusion network designed in the decoder then combines shallow and deep features from the semantic and spatial branches.

(3) We design a vision-based multi-task detection model for urban road drivable areas that integrates road object detection with drivable area segmentation. It shares one efficient feature extraction network, so a single forward pass simultaneously perceives the drivable area ahead and detects pedestrian and vehicle objects.

The proposed multi-task road drivable area detection model fulfills the demands of unmanned-driving environment perception: pedestrian and vehicle detection reaches 78.1% mAP, drivable area segmentation reaches 91.1% mIoU, and inference runs at 36.4 frames per second, meeting both the accuracy and the real-time requirements. |
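For item (2), DeepLab-SF swaps ASPP's dilated branches for strided convolutions. A minimal sketch under that assumption follows; StridedSPP, the stride set (2, 4, 8), and the output width are hypothetical, and the 1x1 and global-pooling branches are kept as in the original ASPP.

```python
# Sketch of a strided spatial pyramid pooling module (assumed layout):
# strided 3x3 convolutions shrink the map before convolving, cutting
# FLOPs relative to dilated convolutions at full resolution; every
# branch is bilinearly upsampled back before concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StridedSPP(nn.Module):
    def __init__(self, c_in, c_out=256, strides=(2, 4, 8)):
        super().__init__()
        self.proj = nn.Conv2d(c_in, c_out, 1)         # 1x1 branch
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, 3, stride=s, padding=1) for s in strides
        )
        self.pool = nn.Sequential(                    # global-context branch
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c_in, c_out, 1)
        )
        self.fuse = nn.Conv2d(c_out * (len(strides) + 2), c_out, 1)

    def forward(self, x):
        size = x.shape[-2:]
        outs = [self.proj(x)]
        for branch in self.branches:
            outs.append(F.interpolate(branch(x), size=size,
                                      mode="bilinear", align_corners=False))
        outs.append(F.interpolate(self.pool(x), size=size,
                                  mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(outs, dim=1))
```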
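Item (3) hinges on one shared feature extractor feeding two heads, so a single forward pass serves both tasks. The sketch below shows that wiring plus a weighted joint loss; MultiTaskPerception, the head interfaces, and the loss weights are illustrative assumptions rather than the thesis's actual design.

```python
# Sketch of a shared-backbone multi-task perception model.
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskPerception(nn.Module):
    def __init__(self, backbone, det_head, num_seg_classes=2, c_feat=256):
        super().__init__()
        self.backbone = backbone   # shared encoder, computed once per image
        self.det_head = det_head   # boxes/classes for pedestrians, vehicles
        self.seg_head = nn.Conv2d(c_feat, num_seg_classes, 1)

    def forward(self, img):
        feats = self.backbone(img)              # one shared forward pass
        det_out = self.det_head(feats)          # detection branch
        seg_logits = F.interpolate(             # drivable-area branch
            self.seg_head(feats), size=img.shape[-2:],
            mode="bilinear", align_corners=False)
        return det_out, seg_logits

def multitask_loss(det_loss, seg_logits, seg_target, w_det=1.0, w_seg=1.0):
    """Weighted sum of the two task losses; weights are tuning assumptions."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    return w_det * det_loss + w_seg * seg_loss
```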
CLC Number: | TP391 |
Open Access Date: | 2024-06-24 |