Thesis Information

Thesis title (Chinese):

 Research on Multimodal Monocular Visual-Inertial and LiDAR SLAM Algorithms for Robust Localization and Consistent Mapping

Name:

 王岩 (Wang Yan)

Student ID:

 16105301002

Confidentiality level:

 Classified (open access after 1 year)

Thesis language:

 Chinese (chi)

Discipline code:

 080202

Discipline:

 Engineering - Mechanical Engineering - Mechatronic Engineering

Student type:

 Doctoral

Degree:

 Doctor of Engineering

Degree year:

 2022

Institution:

 Xi'an University of Science and Technology

School:

 School of Mechanical Engineering

Major:

 Mechanical Engineering

Research area:

 Robotics

Primary supervisor:

 马宏伟 (Ma Hongwei)

Supervisor's institution:

 Xi'an University of Science and Technology

Submission date:

 2023-01-10

Defense date:

 2022-12-06

Thesis title (English):

 Research on the SLAM Algorithm of Multimodal Monocular Visual-Inertial-LiDAR Robust Localization and Consistent Mapping

Keywords (Chinese):

 Mobile Robot Navigation; Simultaneous Localization and Mapping; Multimodal; Monocular Vision; Inertial Measurement Unit; LiDAR

Keywords (English):

 Mobile Robot Navigation ; Simultaneous Localization and Mapping ; Multimodal ; Monocular Visual ; Inertial Measurement Unit ; LiDAR    

Abstract (Chinese):

Simultaneous Localization and Mapping (SLAM) is the core technology for autonomous navigation of mobile robots. To address the insufficient localization robustness of current SLAM algorithms in sensor-degraded scenes and the difficulty of maintaining mapping consistency in large-scale scenes with altitude changes, an mVIL (monocular Visual-Inertial-LiDAR) multimodal sensing system is built from three sensors: a monocular camera, a six-axis low-precision inertial measurement unit (MEMS-IMU), and a mechanically rotating 3D LiDAR. The dissertation studies the mVIL-SLAM algorithm in terms of single-modal data association mechanisms, online fully automatic spatio-temporal initialization, local mixed-coupling dual-sliding-window optimization, and global incremental smoothing optimization. This provides a complete solution to the multimodal simultaneous localization and mapping problem that fuses visual, inertial, and LiDAR measurements, and improves the navigation, localization, and mapping capability of mobile robots.

The single-modal data association mechanisms of the three sensors are studied. To meet the need for initial monocular pose estimates during extrinsic parameter estimation in multimodal system initialization, a fast multi-view geometry measurement algorithm combining linear numerical computation with nonlinear Bundle Adjustment (BA) optimization is constructed on the basis of Structure From Motion (SFM); by solving for a fixed number of monocular camera poses and the 3D coordinates of their observed feature points, a fast initial estimate of the camera pose is obtained, and the experiments show that the monocular camera suffers from scale drift, which provides the theoretical basis for introducing a scale variable into the multimodal initialization algorithm. To resolve the inconsistent definitions used in different fields for the mathematical modeling of multimodal SLAM with fused inertial measurements, a consistent notational convention is adopted throughout the dissertation; a discrete preintegrated measurement model is constructed under the simplified low-precision IMU model used in this work, and the preintegration error propagation model is derived under the right-perturbation model on the Lie group. To address the absence of reflection intensity information in existing LiDAR data association algorithms, a LiDAR feature extraction and matching method with a reflection intensity check is proposed; with only a slight increase in computation, it effectively improves the accuracy of feature extraction and matching and benefits pose estimation.
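
As a reference for the discrete preintegration model mentioned above, the following is one standard form of the preintegrated rotation, velocity, and position measurements between IMU instants i and j, computed from gyroscope readings, accelerometer readings, and the bias estimates at instant i; this is a generic formulation, and the dissertation's exact noise and bias treatment may differ:

$$
\Delta\tilde{\mathbf{R}}_{ij} = \prod_{k=i}^{j-1} \operatorname{Exp}\!\big((\tilde{\boldsymbol{\omega}}_k - \mathbf{b}^{g}_{i})\,\Delta t\big), \qquad
\Delta\tilde{\mathbf{v}}_{ij} = \sum_{k=i}^{j-1} \Delta\tilde{\mathbf{R}}_{ik}\,(\tilde{\mathbf{a}}_k - \mathbf{b}^{a}_{i})\,\Delta t,
$$
$$
\Delta\tilde{\mathbf{p}}_{ij} = \sum_{k=i}^{j-1}\Big[\Delta\tilde{\mathbf{v}}_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta\tilde{\mathbf{R}}_{ik}\,(\tilde{\mathbf{a}}_k - \mathbf{b}^{a}_{i})\,\Delta t^{2}\Big].
$$

Because these quantities depend only on the IMU readings and the biases at instant i, they can be computed once between consecutive camera or LiDAR frames and reused during optimization.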

Existing multimodal SLAM initialization algorithms have incomplete extrinsic calibration and time synchronization capabilities: the extrinsic parameters must be recalibrated offline whenever the sensors are redeployed, and initialization is difficult when the system has no hardware time synchronization. To address these problems, an online, fully automatic spatio-temporal initialization algorithm, mVIL-INIT, is proposed. The algorithm requires no prior knowledge of the environment, no additional markers, and no special motion patterns, and it achieves extrinsic self-calibration and soft time synchronization for mVIL system initialization. To handle the large number of initialization states and their bundling effect when fusing the three sensors simultaneously, a two-stage initialization method is proposed that performs visual-inertial (VI) initialization followed by LiDAR-inertial (LI) initialization. A linear solution is first computed in the representation state space using the rotation constraint to obtain an initial value for the rotational extrinsic parameter; nonlinear optimization with rotation and translation constraints is then performed in the true state space, resolving the bundling effect of the initialization states in a coarse-to-fine manner. In addition, the proposed initialization algorithm considers a more complete state space and applies constrained computation, so that the mVIL system can be initialized robustly and quickly in different scenes and with different mounting configurations. The mVIL-INIT algorithm is evaluated in detail on public and self-collected datasets, verifying its efficiency and robustness.
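
The linear solution of the rotational extrinsic parameter described above is commonly written as a hand-eye-style quaternion constraint. A generic form (not necessarily the exact formulation used in mVIL-INIT) relates the relative IMU rotation and the relative camera (or LiDAR) rotation through the unknown extrinsic rotation:

$$
\mathbf{q}_{b_k b_{k+1}} \otimes \mathbf{q}_{bc} = \mathbf{q}_{bc} \otimes \mathbf{q}_{c_k c_{k+1}}
\;\;\Longrightarrow\;\;
\big[\mathbf{L}(\mathbf{q}_{b_k b_{k+1}}) - \mathbf{R}(\mathbf{q}_{c_k c_{k+1}})\big]\,\mathbf{q}_{bc} = \mathbf{0},
$$

where $\mathbf{L}(\cdot)$ and $\mathbf{R}(\cdot)$ denote the left and right quaternion multiplication matrices. Stacking this constraint over many frame pairs yields an overdetermined homogeneous system whose least-squares solution is the right singular vector associated with the smallest singular value.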

To address the insufficient robustness of filter-based multimodal SLAM algorithms under strongly nonlinear conditions, and the heavy computation and poor extensibility of deeply tightly coupled optimization-based frameworks, a multimodal mixed-coupling algorithm, mVIL-OAM, combining loosely coupled and tightly coupled characteristics is proposed for robust local localization. In the front end, visual-inertial odometry (VIO) interpolation predictions are used in a loosely coupled manner to obtain LiDAR odometry (LO); the VIO is also used to remove motion distortion from the LiDAR point cloud and to associate depth with visual feature points on the unit sphere. In the back end, a tightly coupled dual-sliding-window optimization model is proposed, in which the LiDAR depth constraint, the scan-to-scan constraint (for VIO status monitoring), and the scan-to-map constraint (for local map construction) are introduced into the main visual-inertial sliding window through a companion sliding window, improving the robustness and accuracy of the front end. Both the front end and the back end of mVIL-OAM are more robust and accurate than single-modal or multimodal SLAM systems with similar architectures; the algorithm combines the advantages of the three sensors and provides robust localization in visually or LiDAR-degraded scenes.
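
To illustrate how VIO poses can be used to remove LiDAR motion distortion as described above, the sketch below interpolates a pose for each point's capture time between two VIO poses and re-expresses the point in the scan-end frame. It is an illustrative example under assumed data layouts (per-point timestamps, poses given as a scipy Rotation plus a translation vector), not the dissertation's implementation:

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def undistort_scan(points, point_times, t0, t1, R0, p0, R1, p1):
    """Remove motion distortion from one LiDAR scan using two VIO poses.
    points: (N, 3) points, each captured at point_times[i] in [t0, t1];
    (R0, p0) and (R1, p1): world-frame VIO poses at t0 and t1."""
    rots = Rotation.from_quat(np.vstack([R0.as_quat(), R1.as_quat()]))
    slerp = Slerp([t0, t1], rots)
    alphas = (np.asarray(point_times) - t0) / (t1 - t0)
    out = np.empty_like(points, dtype=float)
    R1_inv = R1.inv()
    for i, (pt, a, t) in enumerate(zip(points, alphas, point_times)):
        R_t = slerp([t])[0]                            # rotation interpolated at the point's capture time
        p_t = (1.0 - a) * np.asarray(p0) + a * np.asarray(p1)  # linear translation interpolation
        p_world = R_t.apply(pt) + p_t                  # point expressed in the world frame
        out[i] = R1_inv.apply(p_world - p1)            # re-expressed in the scan-end (t1) frame
    return out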

To address the altitude drift and difficult loop-closure detection of existing LiDAR mapping algorithms, a multimodal incremental-smoothing algorithm for globally consistent mapping, mVIL-SAM, is proposed on the basis of mVIL-OAM. Merged multi-frame mVIL-OAM results serve as keyframes, and loop closures are detected in three ways: by trajectory, by altitude, and by the Scan-Context descriptor, enabling stable loop closing in scenes with altitude changes. mVIL-SAM updates the back-end poses by incremental smoothing of a factor graph containing an initial factor, an altitude prior factor, odometry (between) factors, and loop-closure factors, where the altitude prior comes from the visual-inertial front end. The global map is maintained with an ikd-Tree data structure for indoor scenes and an octree data structure for outdoor scenes. With its three-level front-end, mid-end, and back-end architecture, mVIL-SAM achieves globally consistent localization and mapping and, compared with existing globally oriented open-source multimodal SLAM algorithms, offers more consistent mapping in scenes with altitude changes.
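
The Scan-Context style loop-closure descriptor mentioned above can be summarized as a polar-grid image of the point cloud. The sketch below builds such a matrix (maximum point height per ring/sector cell) following the general idea of Kim and Kim's Scan Context; the ring count, sector count, and radius are illustrative values, not the dissertation's settings:

import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_radius=80.0):
    """Polar-grid descriptor of a LiDAR scan: each cell stores the maximum
    point height (z) among the points whose (range, azimuth) falls in it.
    points: (N, 3) array in the sensor frame."""
    desc = np.zeros((num_rings, num_sectors))
    ranges = np.linalg.norm(points[:, :2], axis=1)
    azimuths = np.arctan2(points[:, 1], points[:, 0])                        # in [-pi, pi]
    ring = np.minimum((ranges / max_radius * num_rings).astype(int), num_rings - 1)
    sector = ((azimuths + np.pi) / (2 * np.pi) * num_sectors).astype(int) % num_sectors
    keep = ranges < max_radius
    for r, s, z in zip(ring[keep], sector[keep], points[keep, 2]):
        desc[r, s] = max(desc[r, s], z)
    return desc

Two scans can then be compared, for example, with a column-shift-invariant cosine distance between their descriptor matrices, which makes the comparison robust to yaw changes at revisits.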

Finally, mobile robot platforms are built in-house to evaluate the practical performance of the proposed localization and mapping algorithms during robot navigation. Online localization experiments for aerial robot navigation conducted in a simulated coal mine laboratory show that the proposed mVIL-OAM algorithm achieves local centimeter-level online localization for navigation. Online mapping experiments for wheeled robot navigation conducted in a campus environment show that the proposed mVIL-SAM algorithm achieves global decimeter-level consistent mapping.

Abstract (English):

Simultaneous Localization and Mapping (SLAM) is the core technology for autonomous navigation of mobile robots. To address the insufficient localization robustness of current SLAM algorithms in sensor-degraded scenarios and the difficulty of guaranteeing mapping consistency in large-scale scenarios with altitude changes, an mVIL (monocular Visual-Inertial-LiDAR) multimodal sensing system is constructed from three sensors: a monocular camera, a six-axis low-precision MEMS-IMU, and a mechanically rotating 3D LiDAR. The single-modal data association mechanisms, online fully automatic spatio-temporal initialization, local mixed-coupling dual-sliding-window optimization, and global incremental smoothing optimization are studied, providing a complete solution to the multimodal simultaneous localization and mapping problem that fuses visual, inertial, and LiDAR measurements and improving the navigation, localization, and mapping capability of mobile robots.

The single-modal data association models of the three sensors are studied and constructed. First, to meet the need for initial monocular pose estimates during extrinsic parameter estimation in multimodal system initialization, a fast multi-view geometry measurement algorithm combining linear numerical computation with nonlinear Bundle Adjustment (BA) optimization is constructed on the basis of Structure From Motion (SFM). The algorithm obtains a fast initial estimate of the camera pose by solving for a fixed number of monocular camera poses and the 3D coordinates of their observed feature points. The experimental results show that the monocular camera suffers from scale drift, which provides a theoretical basis for introducing a scale variable into the multimodal initialization algorithm. To resolve the inconsistent definitions used in different fields for the mathematical modeling of multimodal SLAM with fused IMU measurements, a consistent notational convention is adopted throughout the dissertation; a discrete preintegrated measurement model is constructed under the simplified low-precision IMU model used in this work, and the preintegration error propagation model is derived under the right-perturbation model on the Lie group. Finally, to compensate for the absence of reflection intensity information in existing LiDAR data association algorithms, a LiDAR feature extraction and matching method with a reflection intensity check is proposed; it effectively improves the accuracy of feature extraction and matching with only a small increase in computation and contributes positively to LiDAR state estimation.
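
To make the idea of an intensity check concrete, the sketch below applies it at the matching stage: a geometric nearest-neighbor correspondence between LiDAR features of consecutive scans is accepted only if the two points also have similar reflectivity. This is one plausible form of such a check under assumed thresholds, not the exact method proposed in the dissertation:

import numpy as np
from scipy.spatial import cKDTree

def match_features(curr_pts, curr_intens, prev_pts, prev_intens,
                   max_dist=1.0, intensity_tol=0.2):
    """Nearest-neighbor feature matching with a reflection-intensity check.
    curr_pts/prev_pts: (N, 3)/(M, 3) feature points; *_intens: their intensities."""
    tree = cKDTree(prev_pts)
    matches = []
    for i, (p, inten) in enumerate(zip(curr_pts, curr_intens)):
        dist, j = tree.query(p)                                  # geometric nearest neighbor
        if dist < max_dist and abs(inten - prev_intens[j]) < intensity_tol:
            matches.append((i, j))                               # close in space and in intensity
    return matches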

Existing multimodal SLAM initialization algorithms do not yet provide complete extrinsic calibration and time synchronization: when the sensors are redeployed, the extrinsic parameters must be recalibrated offline, and when the system has no hardware time synchronization, initialization becomes difficult. To address these problems, an online initialization algorithm, mVIL-INIT, is proposed. The algorithm requires no prior knowledge of the environment, no additional markers, and no special motion patterns, and it is the first online initialization algorithm for mVIL systems to provide extrinsic self-calibration and soft time synchronization. To handle the large number of initialization states and their bundling effect when fusing the three sensors, the initialization is completed in two stages: Visual-Inertial (VI) initialization followed by LiDAR-Inertial (LI) initialization. In both stages, the rotation constraint is first solved linearly in the representation state space to obtain an initial value for the rotational extrinsic parameter, and the rotation and translation constraints are then used for nonlinear optimization in the true state space, which effectively resolves the bundling effect of the initialization states in a coarse-to-fine manner. In addition, the proposed initialization algorithm considers a more complete set of state quantities and constrains the computation, so that the mVIL system can be initialized robustly and quickly in different scenarios and with different mounting configurations. The effectiveness, efficiency, and robustness of the mVIL-INIT algorithm are thoroughly assessed on both public and self-collected datasets.
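
As a concrete complement to the rotation-constraint equation shown earlier, the following numpy sketch stacks the constraint over many frame pairs and extracts the extrinsic rotation from the SVD null space. Quaternions are taken in (w, x, y, z) order, and this is a generic textbook construction rather than the mVIL-INIT code:

import numpy as np

def quat_left(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def quat_right(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def solve_extrinsic_rotation(imu_rel_quats, cam_rel_quats):
    """Solve q_imu (x) q_ext = q_ext (x) q_cam in the least-squares sense by
    stacking [L(q_imu) - R(q_cam)] q_ext = 0 over all frame pairs."""
    A = np.vstack([quat_left(qb) - quat_right(qc)
                   for qb, qc in zip(imu_rel_quats, cam_rel_quats)])
    _, _, vt = np.linalg.svd(A)
    q_ext = vt[-1]                 # right singular vector of the smallest singular value
    return q_ext / np.linalg.norm(q_ext)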

To address the lack of robustness of filter-based multimodal SLAM algorithms under strongly nonlinear conditions, as well as the heavy computation and poor extensibility of deeply tightly coupled optimization-based frameworks, a mixed-coupling multimodal SLAM algorithm combining loose and tight coupling, mVIL-OAM, is proposed for robust local localization. The front end uses Visual-Inertial Odometry (VIO) interpolation predictions in a loosely coupled manner to obtain LiDAR Odometry (LO), and uses the VIO to remove motion distortion from the LiDAR point cloud; the undistorted point cloud is then associated with visual feature points on the unit sphere to provide depth. In the back end, a tightly coupled dual-sliding-window optimization model is proposed, which introduces the LiDAR depth constraint, the scan-to-scan constraint (for VIO status monitoring), and the scan-to-map constraint (for local mapping) into the main visual-inertial sliding window through companion sliding windows, enhancing the robustness and accuracy of the front end. Both the front end and the back end of mVIL-OAM are more robust and accurate than single-modal or multimodal SLAM systems with similar architectures. The system combines the advantages of the three sensors' measurements and is able to localize and map in visually degraded and LiDAR-degraded scenes.

To address the altitude drift and difficult loop-closure detection of existing LiDAR mapping algorithms, a global multimodal SLAM algorithm, mVIL-SAM, with multi-level, globally consistent real-time mapping and localization capability is proposed on the basis of mVIL-OAM. The algorithm merges multi-frame mVIL-OAM results into keyframes and performs loop-closure detection in three ways: by trajectory, by altitude, and by the Scan-Context descriptor, which achieves stable loop closing in scenes with altitude changes. mVIL-SAM updates the back-end poses by incremental smoothing of a pose-only factor graph containing an initial factor, an altitude prior factor, between factors, and loop-closure factors, where the altitude prior comes from the front-end VIO. The global map is maintained with ikd-Tree and octree data structures for indoor and outdoor scenes, respectively. With its three-level front-end, mid-end, and back-end architecture, mVIL-SAM achieves consistent global localization and mapping and, compared with existing globally oriented open-source multimodal SLAM algorithms, offers more consistent mapping in scenes with large altitude changes.
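
The factor-graph back end described above (prior, between, and loop-closure factors updated by incremental smoothing) can be prototyped with GTSAM's iSAM2. The sketch below, assuming recent GTSAM 4.x Python bindings, is a minimal generic pose graph with placeholder noise values; it omits the altitude prior factor and is not the mVIL-SAM implementation:

import numpy as np
import gtsam

# Illustrative noise models for the prior, odometry, and loop-closure factors.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3] * 6))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.10] * 3))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.05] * 3))

isam = gtsam.ISAM2()
new_factors = gtsam.NonlinearFactorGraph()
new_values = gtsam.Values()

# Initial factor anchoring the first keyframe at the origin.
new_factors.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
new_values.insert(0, gtsam.Pose3())

def add_keyframe(k, odom_delta, initial_guess, loop_to=None, loop_delta=None):
    """Insert keyframe k: an odometry (between) factor from k-1 to k and,
    if a loop closure was detected, a between factor to an earlier keyframe;
    then run one incremental iSAM2 smoothing step."""
    new_factors.add(gtsam.BetweenFactorPose3(k - 1, k, odom_delta, odom_noise))
    new_values.insert(k, initial_guess)
    if loop_to is not None:
        new_factors.add(gtsam.BetweenFactorPose3(loop_to, k, loop_delta, loop_noise))
    isam.update(new_factors, new_values)
    new_factors.resize(0)          # factors and values are now owned by iSAM2
    new_values.clear()
    return isam.calculateEstimate()

An altitude prior from the visual-inertial front end would enter analogously as an extra unary factor on each keyframe pose; its exact form is not reproduced here.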

Finally, mobile robot platforms were built in-house to evaluate the practical performance of the proposed SLAM algorithms during online robot navigation. Online localization experiments for aerial robot navigation were conducted in a simulated coal mine laboratory, and the results show that the proposed mVIL-OAM algorithm achieves local centimeter-level online localization for navigation. Online mapping experiments for wheeled robot navigation were conducted in a campus environment, and the results show that the mVIL-SAM algorithm achieves global decimeter-level consistent mapping.

CLC number:

 TP242.6    

Open access date:

 2024-03-21    
