Thesis Information

Chinese Title:

 Research on AGV Simultaneous Localization and Mapping Fusing a Binocular Camera and an IMU

Name:

 Cao Tianyi

Student ID:

 20205224073    

Confidentiality Level:

 Public

Thesis Language:

 Chinese

Discipline Code:

 085500    

Discipline Name:

 Engineering - Mechanical

Student Type:

 Master's

Degree Level:

 Master of Engineering

Degree Year:

 2024    

Degree-Granting Institution:

 Xi'an University of Science and Technology

School:

 School of Mechanical Engineering

Major:

 Mechanical Engineering

Research Direction:

 Autonomous driving of engineering vehicles

First Supervisor:

 Niu Qinyu

First Supervisor's Institution:

 Xi'an University of Science and Technology

Submission Date:

 2024-06-24    

Defense Date:

 2024-05-31    

English Title:

 Research on Simultaneous Localization and Mapping of an AGV Based on a Binocular Camera and IMU

Chinese Keywords:

 Point-line combination; Pose estimation; DBoW2 model; Optical flow tracking; Loop-closure correction

English Keywords:

 Point-line combination; Pose estimation; DBoW2 model; Optical flow tracking; Loop correction

Chinese Abstract:

To improve the robustness of a SLAM system in weakly textured environments, the robot pose is estimated by fusing a camera and an IMU. Point and line features are extracted from the camera images. Because existing line-feature pipelines spend a large share of computing resources on line segment detection and tracking, this thesis proposes a semi-direct line segment tracking algorithm that replaces those steps. The improved point-line visual-inertial SLAM algorithm runs noticeably faster and achieves higher trajectory accuracy and robustness than the conventional algorithm.

(1) The causes of image distortion are analyzed in depth, and the various IMU error terms are examined in detail; on this basis, the visual sensor and the inertial sensor are calibrated separately. A joint camera-IMU data bag is then recorded and calibrated using the individual calibration results, yielding the camera-IMU transformation matrix, the IMU noise parameters, and the stereo camera distortion coefficients. This provides fairly accurate initial information for the subsequent visual-inertial odometry system.
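
The IMU noise terms mentioned above (for example angle random walk and bias instability) are usually identified from a long static recording with an Allan variance analysis. The thesis does not reproduce this procedure, so the following is only a minimal sketch of the standard overlapping Allan deviation computation for one gyroscope axis; the function name and interface are assumptions for illustration.

```python
import numpy as np

def allan_deviation(gyro_rate, fs, taus):
    """Overlapping Allan deviation of one gyro axis.

    gyro_rate: angular-rate samples (rad/s) from a long static recording
    fs:        sample rate in Hz
    taus:      averaging times (s) at which to evaluate the deviation
    """
    theta = np.cumsum(gyro_rate) / fs              # integrate rate to angle
    adev = []
    for tau in taus:
        m = int(round(tau * fs))                   # cluster size in samples
        if m < 1 or 2 * m >= len(theta):
            adev.append(np.nan)                    # tau not resolvable from this record
            continue
        tau_m = m / fs                             # actual averaging time after rounding
        # Overlapping second differences of the integrated angle.
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        adev.append(np.sqrt(np.mean(d ** 2) / (2.0 * tau_m ** 2)))
    return np.array(adev)
```

Reading the angle random walk and bias instability off the resulting log-log curve gives the noise densities with which the visual-inertial estimator is configured.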

(2) A stereo visual-inertial odometry based on point and line features is designed. ORB point features and LSD line features are detected, extracted, and matched in the images, and the relative camera motion is optimized by minimizing the reprojection errors of both point and line features. To reduce the impact of line-tracking errors, the line reprojection error is defined as the sum of the distances from the two endpoints of the projected segment to the observed segment. To offset the extra computational load introduced by line features, a semi-direct line segment tracking algorithm is proposed: known segments are tracked with an optical-flow-based semi-direct scheme, after which outlier segments are rejected and keyframes are selected. The algorithm drops line detection, extraction, and matching in non-keyframes and replenishes line features only in keyframes. Experiments show that the improved algorithm tracks line features markedly faster, so the overall running speed of the point-line visual-inertial odometry increases.
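
Written out explicitly (the notation below is assumed rather than quoted from the thesis), with the observed image line in normalized homogeneous form l = (a, b, c)^T and the projected endpoints p_s = (u_s, v_s) and p_e = (u_e, v_e) of the 3D segment, this line reprojection error is

```latex
d(\mathbf{p}, \mathbf{l}) = \frac{\lvert a u + b v + c \rvert}{\sqrt{a^{2} + b^{2}}}, \qquad
e_{L} = d(\mathbf{p}_{s}, \mathbf{l}) + d(\mathbf{p}_{e}, \mathbf{l}),
```

so the error vanishes only when both projected endpoints lie on the observed image line.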

(3) The back-end optimization of the visual SLAM system is designed. After joint visual-inertial initialization, the data provided by the front end are refined by tightly coupled sliding-window optimization. For loop detection, a point-line loop-closure database is built with the DBoW2 library; when a loop closure occurs, the accumulated error is corrected by solving a graph optimization problem. Finally, an omnidirectional AGV experimental platform for autonomous navigation is built to verify the effectiveness of the algorithm.
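
For reference, a tightly coupled sliding-window objective of the VINS family that this design follows combines a marginalization prior, IMU preintegration residuals, and visual reprojection residuals for point and line features. The symbols below are assumptions rather than the thesis's own notation, with X the window states (poses, velocities, IMU biases, feature parameters) and rho a robust kernel:

```latex
\min_{\mathcal{X}} \;
\Big\| \mathbf{r}_{p} - \mathbf{H}_{p}\,\mathcal{X} \Big\|^{2}
+ \sum_{k \in \mathcal{B}} \Big\| \mathbf{r}_{\mathcal{B}}\big(\hat{\mathbf{z}}^{b_{k}}_{b_{k+1}},\, \mathcal{X}\big) \Big\|^{2}_{\mathbf{P}^{b_{k}}_{b_{k+1}}}
+ \sum_{(i,j) \in \mathcal{P}} \rho\Big( \big\| \mathbf{r}_{\mathcal{P}}\big(\hat{\mathbf{z}}^{c_{j}}_{p_{i}},\, \mathcal{X}\big) \big\|^{2}_{\mathbf{P}^{c_{j}}_{p_{i}}} \Big)
+ \sum_{(i,j) \in \mathcal{L}} \rho\Big( \big\| \mathbf{r}_{\mathcal{L}}\big(\hat{\mathbf{z}}^{c_{j}}_{l_{i}},\, \mathcal{X}\big) \big\|^{2}_{\mathbf{P}^{c_{j}}_{l_{i}}} \Big)
```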

English Abstract:

In order to improve the robustness of a SLAM system in weakly textured environments, the robot pose is estimated by fusing a camera and an IMU. Point and line features are extracted from the camera images; since existing line-feature processing methods consume substantial computing resources during line detection and line tracking, this paper proposes a semi-direct line tracking algorithm to replace these steps. The improved visual-inertial SLAM algorithm runs considerably faster and achieves higher trajectory accuracy and robustness than the traditional algorithm.

(1) This paper analyzes the causes of image distortion in depth and examines the various IMU error terms in detail; on this basis, the vision sensor and the inertial sensor are calibrated separately. A joint camera-IMU data bag is recorded and then calibrated using the results of the separate calibrations, yielding the camera-IMU transformation matrix, the IMU noise parameters, and the stereo camera distortion coefficients. This provides more accurate initial information for the subsequent visual-inertial odometry system.
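
As a small usage illustration (the intrinsic matrix and distortion coefficients below are placeholders, not the calibrated values of this thesis), the calibrated pinhole intrinsics and radial-tangential distortion coefficients would typically be applied to each frame before feature extraction, for example:

```python
import cv2
import numpy as np

# Placeholder pinhole intrinsics and radial-tangential distortion (k1, k2, p1, p2, k3);
# in practice these come from the camera calibration described above.
K = np.array([[458.0,   0.0, 367.0],
              [  0.0, 457.0, 248.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.28, 0.07, 2e-4, 2e-5, 0.0])

def undistort(image):
    """Remove lens distortion so the ideal pinhole model holds for feature extraction."""
    return cv2.undistort(image, K, dist)
```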

(2) A stereo visual-inertial odometry based on point and line features is designed. ORB point features and LSD line features in the image are detected, extracted, and matched, and the relative motion of the camera is optimized by minimizing the reprojection errors of the point and line features. To reduce the influence of line-feature tracking errors, the line reprojection error is defined as the sum of the distances from the two endpoints of the projected line segment to the observed line segment. To reduce the computational load introduced by line features, a semi-direct line segment tracking algorithm is proposed: during tracking, known line segments are followed with an optical-flow-based semi-direct scheme, after which outlier segments are removed and keyframes are selected. The algorithm drops line detection, extraction, and matching in non-keyframes and supplements line features only in keyframes. Experiments show that the improved algorithm tracks line features significantly faster, so the overall running speed of the point-line visual-inertial odometry increases.
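
The thesis does not spell out the implementation of the semi-direct tracking step, so the Python/OpenCV sketch below only illustrates the general idea under assumptions of mine (function name, sample count, and thresholds): sample points along a segment known from the previous frame, track them with pyramidal Lucas-Kanade optical flow, and refit the segment from the tracked points instead of re-detecting and re-matching LSD lines.

```python
import cv2
import numpy as np

def track_segment_lk(prev_gray, cur_gray, p_start, p_end, n_samples=8):
    """Track a known line segment into the next frame (grayscale uint8 images).

    Sample points along the segment, track them with LK optical flow, and refit
    a 2D line to the survivors. Returns new endpoints, or None if tracking failed.
    """
    # Sample points uniformly along the known segment (including its endpoints).
    t = np.linspace(0.0, 1.0, n_samples, dtype=np.float32).reshape(-1, 1)
    pts = ((1.0 - t) * np.float32(p_start) + t * np.float32(p_end)).reshape(-1, 1, 2)

    # Pyramidal Lucas-Kanade optical flow from the previous to the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good) < 4:                      # too few survivors: treat the segment as lost
        return None

    # Refit a line to the tracked points and clip it to their extent.
    vx, vy, x0, y0 = cv2.fitLine(good, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    d = (good - [x0, y0]) @ np.array([vx, vy])
    p1 = np.array([x0, y0]) + d.min() * np.array([vx, vy])
    p2 = np.array([x0, y0]) + d.max() * np.array([vx, vy])
    return p1, p2
```

A segment that loses too many sampled points, or whose refit deviates strongly from the prediction, would then be rejected as an outlier, matching the outlier-removal step described above.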

(3) The back-end optimization of the visual SLAM system is designed. After joint visual-inertial initialization, the data provided by the front end are refined by tightly coupled sliding-window optimization. In the loop detection part, a loop-closure database based on point and line features is built with the DBoW2 library; when a loop closure occurs, the accumulated error is corrected by solving a graph optimization problem. Finally, an omnidirectional AGV experimental platform for autonomous navigation is built to verify the effectiveness of the algorithm.
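
The loop-closure database itself is built on DBoW2's binary bag-of-words vocabulary (a C++ library). Purely as an illustrative stand-in for the appearance check, and explicitly not the DBoW2 API, the sketch below scores keyframe similarity by brute-force matching of ORB descriptors with a ratio test; names and thresholds are assumptions.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def keyframe_signature(gray):
    """ORB descriptors used here as a keyframe's appearance signature."""
    _, descriptors = orb.detectAndCompute(gray, None)
    return descriptors

def is_loop_candidate(des_query, des_keyframe, ratio=0.75, min_matches=40):
    """Flag a keyframe pair as a loop-closure candidate if enough descriptor
    matches survive Lowe's ratio test."""
    if des_query is None or des_keyframe is None:
        return False
    matches = matcher.knnMatch(des_query, des_keyframe, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good) >= min_matches
```

Candidates accepted by such an appearance check would still be verified geometrically before the pose graph is corrected.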

CLC Number:

 TP242.6    

Open Access Date:

 2024-06-25    
