Thesis Information

Thesis title (Chinese): 基于多传感器融合的移动机器人建图定位技术研究
Author: 吴栋 (Wu Dong)
Student ID: 19207107004
Confidentiality level: Public
Thesis language: Chinese
Discipline code: 080902
Discipline: Engineering - Electronic Science and Technology (Engineering or Science degree may be conferred) - Circuits and Systems
Student type: Master's candidate
Degree: Master of Engineering
Degree year: 2022
Degree-granting institution: 西安科技大学 (Xi'an University of Science and Technology)
School: 通信与信息工程学院 (School of Communication and Information Engineering)
Major: Circuits and Systems
Research direction: Robot mapping and localization
First supervisor: 朱代先 (Zhu Daixian)
First supervisor's institution: 西安科技大学 (Xi'an University of Science and Technology)
Submission date: 2022-06-19
Defense date: 2022-06-06
Thesis title (English): Research on Mapping and Positioning Technology of Mobile Robot Based on Multi-Sensor Fusion
Keywords (Chinese): simultaneous localization and mapping; multi-sensor fusion; inertial measurement unit; monocular camera; LiDAR
Keywords (English): SLAM; multi-sensor fusion; IMU; monocular camera; LiDAR

Abstract (Chinese):

Simultaneous localization and mapping (SLAM), the core technology of autonomous mobile robots, has broad application prospects, yet SLAM based on a single sensor still faces severe robustness challenges in real-world applications. To address the poor robustness of single-sensor SLAM in complex practical scenes, and with full consideration of the characteristics and applicable scenarios of each class of sensor, this thesis conducts an in-depth study of mapping and localization technology for mobile robots based on multi-sensor fusion. The main research content is as follows:

(1) To address the large accumulated error of single-LiDAR SLAM during localization and the difficulty of real-time loop-closure detection caused by the large volume of point-cloud map data, the LiDAR-based SLAM algorithm is improved. IMU information is fused to de-distort the point clouds collected by the LiDAR, and the Normal Distributions Transform (NDT) algorithm performs inter-frame point-cloud matching to compute the pose. Using the Scan Context algorithm, accumulated point-cloud frames are reduced in dimension to Scan Context descriptors and matched against the descriptor of the current frame, realizing loop-closure detection and correcting the accumulated error. The inter-frame pose constraints and the loop-closure constraints are fused to optimize the poses and build the point-cloud map, and the method is validated on the public KITTI dataset. Simulation results show that the designed LiDAR-IMU fusion localization and mapping algorithm achieves real-time localization, mapping, and loop-closure detection in outdoor urban environments, with small accumulated error and good accuracy and robustness.

(2) To address the scale ambiguity of monocular SLAM during localization and its failures under degenerate scene structure and large illumination changes, the monocular-camera-based SLAM algorithm is improved. A feature-matching method combining visual point features and line features is designed to establish constraints between images, improving robustness under scene-structure degeneracy and poor illumination. On this basis, IMU information is fused in a tightly coupled manner: visual point/line constraints and IMU preintegration constraints are established, and a sliding-window keyframe nonlinear optimization completes state estimation and error propagation. The method is verified in simulation on the EuRoC dataset, and a hardware platform is built; localization experiments in an indoor laboratory and a simulated coal-mine roadway are conducted with the sensors mounted on a mobile robot and carried by hand, respectively. Simulation and experimental results show that the improved SLAM system fusing a monocular camera and an IMU achieves accurate autonomous localization and mapping in poorly illuminated scenes.

Abstract (English):

Simultaneous localization and mapping (SLAM) is the core technology of autonomous mobile robots and has broad application prospects, but the robustness of single-sensor SLAM in practical applications still faces severe challenges. Aiming at the poor robustness of single-sensor SLAM in complex real-world scenes, and on the basis of fully considering the characteristics and applicable scenarios of various sensors, this thesis makes an in-depth study of mobile-robot mapping and positioning technology based on multi-sensor fusion. The main research contents are as follows:

(1) Aiming at the problems of large cumulative error in the positioning process of SLAM with a single LiDAR, and the large volume of point-cloud map data that makes real-time loop detection difficult, the LiDAR-based SLAM algorithm is improved. The point-cloud data collected by the LiDAR are de-distorted by fusing IMU information, and the Normal Distributions Transform (NDT) algorithm is used for inter-frame point-cloud matching to compute the pose. Combined with the Scan Context algorithm, the accumulated point-cloud frames are reduced in dimension to generate Scan Context descriptors, which are matched with the descriptor generated from the current frame to realize loop-closure detection and correct the cumulative error. By integrating the inter-frame pose constraints and the loop-closure constraints, pose optimization and point-cloud map construction are realized, and the method is verified on the public KITTI dataset. Simulation results show that the designed LiDAR-IMU fusion localization and mapping algorithm achieves real-time localization, map construction, and loop-closure detection in outdoor urban environments, with small cumulative error and good accuracy and robustness.
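To make the loop-closure step above concrete, here is a minimal sketch of the Scan Context idea in Python/NumPy: a 3D scan is reduced to a ring-by-sector matrix of maximum point heights, and two descriptors are compared with a column-shift-invariant cosine distance (a column shift corresponds to a yaw rotation of the sensor). The function names and parameter values are illustrative, not the thesis's implementation; the published algorithm also uses a ring-key search tree to prune candidates before this exhaustive comparison.

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_radius=80.0):
    """Reduce an (N, 3) LiDAR scan to a ring x sector matrix of max point heights."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi                    # azimuth in [0, 2*pi)
    keep = r < max_radius
    ring = np.minimum((r[keep] / max_radius * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta[keep] / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    desc = np.full((num_rings, num_sectors), -np.inf)
    np.maximum.at(desc, (ring, sector), z[keep])        # max height per bin
    desc[np.isinf(desc)] = 0.0                          # empty bins contribute zero
    return desc

def sc_distance(a, b):
    """Rotation-invariant distance: min over column shifts of mean cosine distance."""
    best = np.inf
    for shift in range(a.shape[1]):
        b_s = np.roll(b, shift, axis=1)                 # column shift = yaw rotation
        dots = np.sum(a * b_s, axis=0)
        norms = np.linalg.norm(a, axis=0) * np.linalg.norm(b_s, axis=0)
        cos = np.divide(dots, norms, out=np.zeros_like(dots), where=norms > 1e-9)
        best = min(best, 1.0 - cos.mean())
    return best

# A loop-closure candidate is declared when sc_distance(current, stored)
# falls below a threshold; the matched pair then adds a loop constraint
# to the pose graph.
```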

(2) Aiming at the problems of scale uncertainty in the positioning process of monocular SLAM, and of localization failures caused by degenerate scene structure and large illumination changes, the monocular-camera-based SLAM algorithm is improved. A feature-matching method combining visual point features and line features is designed to establish constraints between images, so as to improve the robustness of the system under scene-structure degeneracy and poor illumination. On this basis, IMU information is fused: visual point/line constraints and IMU preintegration constraints are established in a tightly coupled formulation, and a sliding-window keyframe nonlinear optimization completes state estimation and error propagation. The EuRoC dataset is used for simulation verification, and a hardware platform is built; localization experiments in an indoor laboratory and a simulated coal-mine roadway are carried out with the equipment mounted on a mobile robot and carried by hand, respectively. Simulation and experimental results show that the improved SLAM system fusing a monocular camera and an IMU achieves accurate autonomous localization and map construction in poorly illuminated scenes.
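The IMU preintegration constraint used in the tightly coupled fusion can likewise be sketched. The idea is to integrate the raw gyroscope and accelerometer samples between two keyframes once, in the first keyframe's body frame, into relative rotation, velocity, and position deltas that serve as a single relative-motion factor no matter how often the keyframe states are re-linearized. This is a simplified illustration assuming constant biases over the window; the full on-manifold method also propagates the measurement-noise covariance and the Jacobians with respect to the biases.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)                    # first-order approximation
    k = skew(phi / angle)
    return np.eye(3) + np.sin(angle) * k + (1.0 - np.cos(angle)) * (k @ k)

def preintegrate(gyro, accel, dt, gyro_bias, accel_bias):
    """Integrate (N, 3) gyro/accel samples into keyframe-relative deltas.

    The returned deltas are expressed in the first keyframe's body frame and
    are independent of the global pose; gravity is reintroduced only when the
    deltas are turned into an optimization factor.
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_b = a - accel_bias                            # bias-corrected specific force
        dp += dv * dt + 0.5 * (dR @ a_b) * dt**2        # position: uses old R and v
        dv += (dR @ a_b) * dt                           # velocity: uses old R
        dR = dR @ so3_exp((w - gyro_bias) * dt)         # rotation updated last
    return dR, dv, dp
```

In the sliding-window optimization, the residual between these predicted deltas and the deltas implied by the estimated keyframe states forms the IMU factor that is minimized jointly with the point and line reprojection errors.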


CLC number: TP242.6
Open date: 2022-06-21
