论文中文题名: |
矿井下多传感器融合SLAM技术研究
|
姓名: |
刘海涛
|
学号: |
20210226074
|
保密级别: |
公开
|
论文语种: |
chi
|
学科代码: |
085215
|
学科名称: |
工学 - 工程 - 测绘工程
|
学生类型: |
硕士
|
学位级别: |
工程硕士
|
学位年度: |
2023
|
培养单位: |
西安科技大学
|
院系: |
测绘科学与技术学院
|
专业: |
测绘工程
|
研究方向: |
同步定位与地图构建
|
第一导师姓名: |
龚云
|
第一导师单位: |
西安科技大学
|
第二导师姓名: |
姜刚
|
论文提交日期: |
2023-06-16
|
论文答辩日期: |
2023-06-04
|
论文外文题名: |
Research on multi-sensor fusion SLAM technology in underground mines
|
论文中文关键词: |
多传感器融合 ; 矿井 ; SLAM ; 融合定位 ; 地图构建
|
论文外文关键词: |
Multi-sensor fusion ; mine ; SLAM ; fusion localization ; map construction
|
论文中文摘要: |
︿
随着物联网、人工智能、大数据技术的快速发展,智慧矿山和数字矿山的建设成为矿山行业内的研究热点。同步定位与地图构建(Simultaneous Localization and Mapping, SLAM)技术可以利用机器人代替人在矿山巷道内进行定位和环境探测,为智慧矿山的建设提供基础保障。然而,由于矿井环境光照不足、缺乏纹理等复杂条件的影响,单一传感器的激光SLAM或视觉SLAM方法存在一定的局限性。为了解决这些问题,本文进行了基于视觉、激光和惯性测量单元(Inertial Measurement Unit, IMU)的多传感器数据融合SLAM技术研究。主要研究内容如下:
(1)针对矿井环境下视觉惯性SLAM初始化耗时较长,且易因光照不足、运动不充分等原因导致初始化失败的问题,本文利用激光雷达获取深度信息精确且快速的特点,借助激光点云数据辅助提取视觉特征深度,提出了一种激光雷达辅助视觉惯性里程计的多传感器融合SLAM初始化算法。在公开数据集和矿井环境下进行了对比实验,结果表明:在数据集场景中,本文算法的初始化耗时比视觉惯性SLAM初始化算法平均减少4.125秒;在矿井环境中平均减少4.385秒,证明多传感器融合SLAM初始化算法有效提高了视觉惯性SLAM系统的初始化速度。
(2)针对单一传感器SLAM在矿井光照较弱、缺乏纹理环境中稳定性低、定位精度较差的问题,本文在多传感器数据融合SLAM系统后端采用了控制优化规模的滑动窗口算法、视觉与激光双通道回环检测算法以及基于因子图的全局优化算法。在公开数据集和矿井环境下进行了对比实验,结果表明:本文提出的多传感器数据融合SLAM比单一传感器的视觉惯性SLAM和激光惯性SLAM具有更好的鲁棒性,更适用于矿井复杂环境;本文多传感器数据融合SLAM与激光惯性SLAM在矿井环境下定位精度相当,均优于视觉惯性SLAM,同时本文算法运行更加稳定,轨迹更加平滑,体现出更好的稳定性。
﹀
|
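The LiDAR-assisted initialization described in item (1) of the abstract relies on associating LiDAR range measurements with tracked image features, so that visual features receive metric depth immediately instead of waiting for triangulation. Below is a minimal Python sketch of that association step, assuming known camera intrinsics K and LiDAR-to-camera extrinsics T_cam_lidar; all names are illustrative and this is not the thesis implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def depth_from_lidar(features_uv, lidar_xyz, T_cam_lidar, K, max_pixel_dist=3.0):
    """Assign depth to 2D visual features using one LiDAR scan (illustrative sketch).

    features_uv : (N, 2) pixel coordinates of tracked features
    lidar_xyz   : (M, 3) LiDAR points in the LiDAR frame
    T_cam_lidar : (4, 4) extrinsic transform from the LiDAR frame to the camera frame
    K           : (3, 3) camera intrinsic matrix
    Returns an (N,) array of depths; NaN where no LiDAR point projects close enough.
    """
    # Transform LiDAR points into the camera frame and keep points in front of the camera.
    pts_h = np.hstack([lidar_xyz, np.ones((lidar_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project the points onto the image plane.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # For each feature, take the depth of the nearest projected LiDAR point.
    tree = cKDTree(uv)
    dist, idx = tree.query(features_uv, k=1)
    depth = pts_cam[idx, 2].astype(float)
    depth[dist > max_pixel_dist] = np.nan
    return depth
```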
论文外文摘要: |
︿
With the rapid development of the Internet of Things, artificial intelligence and big data technology, the construction of intelligent mines and digital mines has become a research hotspot in the mining industry. Simultaneous Localization and Mapping (SLAM) technology allows robots to take the place of humans in performing localization and environmental exploration in mine tunnels, providing a foundation for intelligent mine construction. However, single-sensor LiDAR SLAM and visual SLAM methods are limited by the complex conditions of the mine environment, such as insufficient illumination and lack of texture. To solve these problems, this paper studies multi-sensor data fusion SLAM technology based on vision, LiDAR and an inertial measurement unit (IMU). The main research contents are as follows:
(1) To address the long initialization time of visual-inertial SLAM in the mine environment and the initialization failures caused by insufficient illumination and insufficient motion, this paper exploits the accurate and fast depth measurements provided by LiDAR, using the LiDAR point cloud to assist in recovering the depth of visual features, and proposes a multi-sensor fusion SLAM initialization algorithm based on LiDAR-assisted visual-inertial odometry. Comparative experiments on public datasets and in a mine environment show that the initialization time of the proposed algorithm is on average 4.125 seconds shorter than that of the visual-inertial SLAM initialization algorithm in the dataset scenes, and 4.385 seconds shorter in the mine environment, demonstrating that the multi-sensor fusion initialization algorithm improves the initialization speed of the visual-inertial SLAM system.
(2) To address the low stability and poor localization accuracy of single-sensor SLAM in the weakly illuminated, texture-poor mine environment, the back end of the multi-sensor data fusion SLAM system adopts a sliding-window algorithm that limits the scale of optimization, a dual-channel visual and LiDAR loop closure detection algorithm, and a factor-graph-based global optimization algorithm. Comparative experiments on public datasets and in a mine environment show that the proposed multi-sensor data fusion SLAM is more robust than single-sensor visual-inertial SLAM and LiDAR-inertial SLAM and is better suited to the complex mine environment. The proposed multi-sensor data fusion SLAM achieves localization accuracy comparable to LiDAR-inertial SLAM in the mine environment, and both outperform visual-inertial SLAM; at the same time, the proposed system runs more stably and produces smoother trajectories, reflecting its better stability.
﹀
|
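As a rough illustration of the factor-graph global optimization mentioned in item (2), the sketch below builds a toy pose graph: odometry factors chain consecutive keyframe poses, and one loop-closure factor (of the kind the dual-channel visual/LiDAR loop detector would confirm) corrects the accumulated drift. It assumes the GTSAM 4.x Python bindings and uses placeholder measurements; it is not the thesis code.

```python
import numpy as np
import gtsam

# Toy pose graph: prior on the first keyframe, odometry (between) factors from the
# front end, and one loop-closure constraint that pulls the drifted chain back.
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10] * 6))

graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# Placeholder odometry: each keyframe moves 1 m forward along x; the initial
# guesses drift slightly sideways to mimic accumulated error.
step = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))
for i in range(1, 5):
    graph.add(gtsam.BetweenFactorPose3(i - 1, i, step, odom_noise))
    initial.insert(i, gtsam.Pose3(gtsam.Rot3(), np.array([float(i), 0.05 * i, 0.0])))

# Loop closure between keyframes 0 and 4 (accepted, e.g., only when both the visual
# and the LiDAR channel agree); global optimization redistributes the drift.
graph.add(gtsam.BetweenFactorPose3(0, 4, gtsam.Pose3(gtsam.Rot3(), np.array([4.0, 0.0, 0.0])), loop_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(4).translation())
```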
参考文献: |
︿
[1] 丁恩杰, 俞啸, 夏冰, 等. 矿山信息化发展及以数字孪生为核心的智慧矿山关键技术[J]. 煤炭学报, 2022, 47(01): 564-578.
[2] 葛世荣, 胡而已, 李允旺. 煤矿机器人技术新进展及新方向[J]. 煤炭学报, 2023, 48(01): 54-73.
[3] 孙继平, 江嬴. 矿井车辆无人驾驶关键技术研究[J]. 工矿自动化, 2022, 48(05): 1-5+31.
[4] 马志艳, 邵长松, 杨光友, 等. 同步定位与建图技术研究进展[J]. 电光与控制, 2023, 30(03): 78-85+106.
[5] 邓世燕. 基于多传感器融合的SLAM及演示验证系统实现[D]. 成都: 电子科技大学, 2021.
[6] 毛军, 付浩, 褚超群, 等. 惯性/视觉/激光雷达SLAM技术综述[J]. 导航定位与授时, 2022, 9(04): 17-30.
[7] 王海军, 曹云, 王洪磊. 煤矿智能化关键技术研究与实践[J]. 煤田地质与勘探, 2023, 51(01): 44-54.
[8] 阴贺生, 裴硕, 徐磊, 等. 多机器人视觉同时定位与建图技术研究综述[J]. 机械工程学报, 2022, 58(11): 11-36.
[9] 曾庆化, 罗怡雪, 孙克诚, 等. 视觉及其融合惯性的SLAM技术发展综述[J]. 南京航空航天大学学报, 2022, 54(06): 1007-1020.
[10] 王文森, 黄凤荣, 王旭, 等. 基于深度学习的视觉惯性里程计技术综述[J]. 计算机科学与探索, 2023, 17(03): 549-560.
[11] 吴斌, 丁钰超, ABLABasri. 自动导引车与机器集成调度问题研究现状[J]. 计算机工程与应用, 2023, 59(06): 1-12.
[12] Anderson S, Barfoot T D. RANSAC for motion-distorted 3D visual sensors[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 2093-2099.
[13] Behley J, Stachniss C. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments[C]//Robotics: Science and Systems. 2018: 59.
[14] Harris C, Stephens M. A combined corner and edge detector[C]//Alvey vision conference. 1988, 15(50): 10-5244.
[15] Rosten E, Drummond T. Machine learning for high-speed corner detection[C]//Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006. Proceedings, Part I 9. Springer Berlin Heidelberg, 2006: 430-443.
[16] Shi J, Tomasi C. Good features to track[C]//1994 Proceedings of IEEE conference on computer vision and pattern recognition. IEEE, 1994: 593-600.
[17] Bay H, Tuytelaars T, Van Gool L. Surf: Speeded up robust features[J]. Lecture notes in computer science, 2006, 3951: 404-417.
[18] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International journal of computer vision, 2004, 60: 91-110.
[19] Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient alternative to SIFT or SURF[C]//2011 International conference on computer vision. IEEE, 2011: 2564-2571.
[20] Calonder M, Lepetit V, Strecha C, et al. Brief: Binary robust independent elementary features[C]//Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11. Springer Berlin Heidelberg, 2010: 778-792.
[21] Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision[C]//IJCAI'81: 7th international joint conference on Artificial intelligence. 1981, 2: 674-679.
[22] Horn B K P, Schunck B G. Determining optical flow[J]. Artificial intelligence, 1981, 17(1-3): 185-203.
[23] 杨观赐, 王霄远, 蒋亚汶, 等. 视觉与惯性传感器融合的SLAM技术综述[J]. 贵州大学学报(自然科学版), 2020, 37(06): 1-12.
[24] 杨子寒, 赖际舟, 吕品, 等. 一种可参数自标定的鲁棒视觉/惯性定位方法[J]. 仪器仪表学报, 2021, 42(07): 259-267.
[25] Luo W, Xiong Z, Xing L, et al. An IMU/Visual Odometry Integrated Navigation Method Based on Measurement Model Optimization[C]//2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC). IEEE, 2018: 1-5.
[26] Weiss S M. Vision based navigation for micro helicopters[D]. ETH Zurich, 2012.
[27] Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//2015 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2015: 298-304.
[28] Qin T, Li P, Shen S. Vins-mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[29] Alismail H, Browning B, Lucey S. Robust tracking in low light and sudden illumination changes[C]//2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016: 389-398.
[30] Qin T, Cao S, Pan J, et al. A general optimization-based framework for global pose estimation with multiple sensors[J]. arXiv preprint arXiv:1901.03642, 2019.
[31] Campos C, Elvira R, Rodríguez J J G, et al. Orb-slam3: An accurate open-source library for visual, visual-inertial, and multimap slam[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[32] 曹力科, 肖晓晖. 基于卷帘快门RGB-D相机的视觉惯性SLAM方法[J]. 机器人, 2021, 43(02): 193-202.
[33] Grisetti G, Stachniss C, Burgard W. Improved techniques for grid mapping with rao-blackwellized particle filters[J]. IEEE Transactions on Robotics, 2007, 23(1): 34-46.
[34] Kohlbrecher S, Von Stryk O, Meyer J, et al. A flexible and scalable SLAM system with full 3D motion estimation[C]//2011 IEEE international symposium on safety, security, and rescue robotics. IEEE, 2011: 155-160.
[35] Hess W, Kohler D, Rapp H, et al. Real-time loop closure in 2D LIDAR SLAM[C]//2016 IEEE international conference on robotics and automation (ICRA). IEEE, 2016: 1271-1278.
[36] Zhang J, Singh S. LOAM: Lidar odometry and mapping in real-time[C]//Robotics: Science and Systems. 2014, 2(9): 1-9.
[37] Wang H, Wang C, Chen C L, et al. F-loam: Fast lidar odometry and mapping[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021: 4390-4396.
[38] Neuhaus F, Ko T, Kohnen R, et al. Mc2slam: Real-time inertial lidar odometry using two-scan motion compensation[C]//Pattern Recognition: 40th German Conference, GCPR 2018, Stuttgart, Germany, October 9-12, 2018, Proceedings 40. Springer International Publishing, 2019: 60-72.
[39] Ye H, Chen Y, Liu M. Tightly coupled 3d lidar inertial odometry and mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 3144-3150.
[40] Shan T, Englot B, Meyers D, et al. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping[C]//2020 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2020: 5135-5142.
[41] Li K, Li M, Hanebeck U D. Towards high-performance solid-state-lidar-inertial odometry and mapping[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5167-5174.
[42] Zhang J, Singh S. Visual-lidar odometry and mapping: Low-drift, robust, and fast[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015: 2174-2181.
[43] Shao W, Vijayarangan S, Li C, et al. Stereo visual inertial LiDAR simultaneous localization and mapping[C]//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Macau, SAR, China. IEEE, 2019: 370-377.
[44] Zhao S, Zhang H, Wang P, et al. Super odometry: IMU-centric LiDAR-visual-inertial estimator for challenging environments[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021: 8729-8736.
[45] Yang Y, Geneva P, Zuo X, et al. Tightly-coupled aided inertial navigation with point and plane features[C]//Proceedings of International Conference on Robotics and Automation (ICRA). Montreal, QC, Canada. IEEE, 2019: 6094-6100.
[46] Zuo X, Geneva P, Lee W, et al. LIC-Fusion: LiDAR-inertial-camera odometry[C]//Proceedings of 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Macau, SAR, China. IEEE, 2019: 5848-5854.
[47] Shan T, Englot B, Ratti C, et al. Lvi-sam: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping[C]//2021 IEEE international conference on robotics and automation (ICRA). IEEE, 2021: 5692-5698.
[48] Lin J, Zhang F. R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package[C]//2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022: 10672-10678.
[49] Zheng C, Zhu Q, Xu W, et al. FAST-LIVO: Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry[C]//2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022: 4003-4009.
[50] 周治国, 曹江微, 邸顺帆. 3D激光雷达SLAM算法综述[J]. 仪器仪表学报, 2021, 42(09): 13-27.
[51] Grater J, Wilczynski A, Lauer M. LIMO: lidar-monocular visual odometry[C]//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain. IEEE, 2018: 7872-7879.
[52] Zuo X, Yang Y, Geneva P, et al. LIC-Fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking[C]//Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV, USA. IEEE, 2020: 5112-5119.
[53] Shin Y S, Park Y S, Kim A. DVL-SLAM: sparse depth enhanced direct visual-LiDAR SLAM[J]. Autonomous Robots, 2020, 44(5): 115-130.
[54] Wisth D, Camurri M, Das S, et al. Unified multimodal landmark tracking for tightly coupled lidar-visual-inertial odometry[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1004-1011.
[55] 董一鸣. 基于混合固态激光雷达的车辆信息提取与类型识别研究[D]. 南京: 南京信息工程大学, 2022.
[56] Lynen S, Achtelik M W, Weiss S, et al. A robust and modular multi-sensor fusion approach applied to mav navigation[C]//2013 IEEE/RSJ international conference on intelligent robots and systems. IEEE, 2013: 3923-3929.
[57] Forster C, Carlone L, Dellaert F, et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.
[58] Aggarwal P, Syed Z, Niu X, et al. A standard testing and calibration procedure for low cost MEMS inertial sensors and units[J]. The Journal of Navigation, 2008, 61(2): 323-336.
[59] Furgale P, Rehder J, Siegwart R. Unified temporal and spatial calibration for multi-sensor systems[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 1280-1286.
[60] Maye J, Furgale P, Siegwart R. Self-supervised calibration for robotic systems[C]//2013 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2013: 473-480.
[61] Oth L, Furgale P, Kneip L, et al. Rolling shutter camera calibration[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2013: 1360-1367.
[62] 卢陶然. 面向无人机的视觉-惯性里程计算法研究[D]. 成都: 电子科技大学, 2020.
[63] 高翔. 视觉SLAM十四讲: 从理论到实践[M]. 北京: 电子工业出版社, 2017.
[64] 张保军. 基于旋量理论和李群—李代数的空间机械臂动力学建模与分析[D]. 长沙: 国防科技大学, 2017.
[65] Xu D, Li Y F, Tan M. A general recursive linear method and unique solution pattern design for the perspective-n-point problem[J]. Image and Vision Computing, 2008, 26(6): 740-750.
[66] Hartley R, Zisserman A. Multiple view geometry in computer vision[M]. Cambridge university press, 2003.
[67] Rosten E, Drummond T. Machine learning for high-speed corner detection[C]//Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006. Proceedings, Part I 9. Springer Berlin Heidelberg, 2006: 430-443.
[68] Zhang N, Zhao Y. Fast and robust monocular visual-inertial odometry using points and lines[J]. Sensors, 2019, 19(20): 4545.
[69] Forster C, Carlone L, Dellaert F, et al. On-manifold preintegration theory for fast and accurate visual-inertial navigation[J]. IEEE Transactions on Robotics, 2015: 1-18.
[70] Forster C, Carlone L, Dellaert F, et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.
[71] 刘刚, 葛洪伟. 视觉惯导SLAM初始化算法研究[J]. 计算机科学与探索, 2021, 15(08): 1546-1554.
[72] Zhang J, Singh S. Low-drift and real-time lidar odometry and mapping[J]. Autonomous Robots, 2017, 41(2): 401-416.
[73] Shan T, Englot B. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 4758-4765.
[74] 杨雪梅, 李帅永. 移动机器人视觉SLAM回环检测原理、现状及趋势[J]. 电子测量与仪器学报, 2022, 36(08): 1-12.
[75] Yengen Z U, Sezer V. A novel data association technique to improve run-time efficiency of SLAM algorithms[J]. Eskişehir Technical University Journal of Science and Technology A-Applied Sciences and Engineering, 2019, 20(2): 179-194.
[76] 高源东. 移动机器人视觉SLAM回环检测技术研究[D]. 南京: 东南大学, 2020.
[77] Arthur D, Vassilvitskii S. K-means++: the advantages of careful seeding[C]//Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms. 2007: 1027-1035.
[78] Sivic J, Zisserman A. Video Google: A text retrieval approach to object matching in videos[C]//Computer Vision, IEEE International Conference on. IEEE Computer Society, 2003: 1470-1477.
[79] Kaess M, Johannsson H, Roberts R, et al. iSAM2: Incremental smoothing and mapping with fluid relinearization and incremental variable reordering[C]//2011 IEEE International Conference on Robotics and Automation. IEEE, 2011: 3281-3288.
[80] Dellaert F, Kaess M. Square Root SAM: Simultaneous localization and mapping via square root information smoothing[J]. The International Journal of Robotics Research, 2006, 25(12): 1181-1203.
[81] Yin J, Li A, Li T, et al. M2DGR: A multi-sensor and multi-scenario SLAM dataset for ground robots[J]. IEEE Robotics and Automation Letters, 2021, 7(2): 2266-2273.
﹀
|
中图分类号: |
P232 TP242.2
|
开放日期: |
2023-06-16
|