Thesis Information

Chinese Title:

 融合视觉与激光雷达的巷道地图构建与掘进机定位方法研究    

Name:

Wu Yujia

Student ID:

 20205016017    

Confidentiality Level:

Confidential (open after 1 year)

Language:

Chinese

Discipline Code:

 080202    

Discipline:

Engineering - Mechanical Engineering - Mechatronic Engineering

Student Type:

Master's

Degree Level:

Master of Engineering

Degree Year:

 2023    

Institution:

Xi'an University of Science and Technology

School:

School of Mechanical Engineering

Major:

Mechanical Engineering

Research Direction:

Intelligent Detection and Control

First Supervisor:

Zhang Xuhui

First Supervisor's Institution:

Xi'an University of Science and Technology

Submission Date:

 2023-06-13    

Defense Date:

 2023-06-03    

English Title:

 Research on Roadway Map Construction and Roadheader Localization Method by Fusing Vision and LiDAR    

Chinese Keywords:

 掘进机定位 ; 视觉 ; 激光雷达 ; 巷道地图构建    

English Keywords:

Roadheader positioning; Vision; LiDAR; Roadway map construction

Abstract (Chinese):

Pose measurement and positioning technology for underground tunneling equipment has become a bottleneck in automating coal mine tunneling faces. Because GPS is unavailable in the harsh underground environment of coal mines and many other factors interfere, positioning technologies that are common on the surface are difficult to apply in practice. In recent years, vision-based pose measurement methods that rely on cooperative targets have been adopted to some extent owing to their non-contact operation and high accuracy, but they face challenges in the visual calibration of cooperative targets, target relocation, and time-consuming station moves. This thesis therefore proposes a roadway map construction and simultaneous localization method fusing vision and LiDAR, which extracts the geometric features of the roadway to build an environment map and thereby assists roadheader pose measurement. It studies LiDAR-based map construction, vision-based roadway reconstruction and localization, and a fused Simultaneous Localization and Mapping (SLAM) method, which is significant for advancing low-manned and even unmanned production at coal mine tunneling faces. The main research contents are as follows:

The thesis investigates 3D map construction and roadheader positioning by fusing LiDAR and a camera, focusing on the difficulty of measuring roadheader pose without cooperative targets. By comparing domestic and international coal mine roadway mapping methods and numerous positioning technologies, a roadway map construction and roadheader positioning scheme is proposed. A 3D point cloud map of the roadway is constructed to assist the positioning of tunneling equipment, realizing roadheader localization without cooperative targets and laying the foundation for long-distance positioning and navigation of equipment in coal mine roadways.

To address problems such as motion distortion of laser point clouds during roadway map construction, the thesis studies a LiDAR-based 3D mapping method for coal mines. Laser point clouds are collected and preprocessed, and bilateral filtering and feature extraction methods for point clouds are studied. Generalized Iterative Closest Point (GICP) registration then estimates the inter-frame pose, yielding multiple sets of consistent point clouds observed from different viewpoints as the robot moves; these point clouds are assembled into a 3D roadway map in a unified coordinate frame. This ensures accurate and efficient LiDAR mapping and provides strong support for fused mapping.
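To make the registration step concrete, the following is a minimal illustrative sketch (not code from the thesis) of scan-to-scan GICP and map accumulation using the Open3D library; it assumes Open3D 0.14+ for `registration_generalized_icp`, and the scan file names, voxel sizes, and correspondence distance are hypothetical placeholders.

```python
# Illustrative sketch only: scan-to-scan GICP and map accumulation with Open3D.
# Assumes Open3D >= 0.14 (registration_generalized_icp); all file names and
# parameters are placeholders, not values from the thesis.
import copy
import numpy as np
import open3d as o3d

def preprocess(cloud, voxel=0.1):
    # Downsample, then estimate the per-point normals/covariances GICP relies on.
    cloud = cloud.voxel_down_sample(voxel)
    params = o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30)
    cloud.estimate_normals(params)
    cloud.estimate_covariances(params)
    return cloud

scans = [preprocess(o3d.io.read_point_cloud(f"scan_{i:04d}.pcd"))
         for i in range(10)]                  # hypothetical scan files

poses = [np.eye(4)]                           # pose of scan i in the map frame
for src, tgt in zip(scans[1:], scans[:-1]):
    result = o3d.pipelines.registration.registration_generalized_icp(
        src, tgt,
        max_correspondence_distance=1.0,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationForGeneralizedICP())
    # result.transformation maps src into tgt's frame; chain it onto tgt's pose.
    poses.append(poses[-1] @ result.transformation)

map_cloud = o3d.geometry.PointCloud()
for scan, T in zip(scans, poses):
    map_cloud += copy.deepcopy(scan).transform(T)
o3d.io.write_point_cloud("roadway_map.pcd", map_cloud.voxel_down_sample(0.05))
```

Chaining raw scan-to-scan estimates like this drifts over long roadways, which is one reason the later chapters add visual odometry initial values and loop closure.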

To address the scarcity of distinctive environmental features in coal mine roadways, the thesis proposes a vision-based equipment positioning and mapping method. A camera intrinsic calibration model is constructed and the intrinsic parameters are solved. The Oriented FAST and Rotated BRIEF (ORB) algorithm extracts environmental features and matches feature points, and the Grid-based Motion Statistics (GMS) algorithm eliminates mismatches. The Perspective-n-Point (PnP) method estimates the pose, and after nonlinear optimization a 3D point cloud map is generated in real time. This supplies map information for roadheader positioning and autonomous navigation, and creates the conditions for fusing the LiDAR and vision systems.
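As a purely illustrative sketch of such a front end (not code from the thesis), the OpenCV calls below chain ORB extraction, GMS mismatch rejection, and RANSAC-based PnP; the images, the intrinsic matrix `K`, and the 3D points are hypothetical placeholders, since in a real pipeline the 3D points would come from triangulation or a depth sensor.

```python
# Illustrative sketch only: ORB matching, GMS outlier rejection, PnP pose estimate.
# Requires opencv-contrib-python for cv2.xfeatures2d.matchGMS; all inputs are
# placeholders, not data from the thesis.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming brute-force matching, then GMS keeps matches whose grid neighborhoods
# move consistently, discarding mismatches.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
size1 = (img1.shape[1], img1.shape[0])        # (width, height)
size2 = (img2.shape[1], img2.shape[0])
good = cv2.xfeatures2d.matchGMS(size1, size2, kp1, kp2, matches,
                                withRotation=False, withScale=False)

# PnP recovers the frame-2 pose from 3D-2D correspondences. pts3d[i] stands in
# for the 3D point observed at kp1[good[i].queryIdx]; real values would come
# from triangulation or depth.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                           # assumed intrinsics
pts3d = np.random.rand(len(good), 3).astype(np.float32)  # placeholder geometry
pts2d = np.float32([kp2[m.trainIdx].pt for m in good])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
R, _ = cv2.Rodrigues(rvec)    # (R, tvec): camera pose used for mapping
```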

To tackle the complex environment, weak texture, and difficult global positioning of the roadheader in coal mine roadways, the thesis studies a SLAM framework fusing vision and LiDAR to achieve simultaneous localization and mapping for tunneling equipment. First, the camera and LiDAR are jointly calibrated to establish the spatial relationship between their observations. The pose estimate from the visual odometry then provides the initial value for point cloud registration, safeguarding registration accuracy. During map construction and optimization, GICP registration improves the precision of pose estimation. For loop closure detection, a visual bag-of-words method is used, and loop correction optimization yields the fused map and the computed roadheader pose.
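How those pieces could fit together is sketched below (illustrative only, under assumed inputs): a visual odometry estimate seeds GICP, the odometry and loop closure constraints enter a pose graph, and Open3D's global optimizer corrects the trajectory. The scan files, the VO relative poses `T_vo`, and the loop pair are placeholders.

```python
# Illustrative sketch only: VO-seeded GICP odometry plus pose-graph loop
# correction with Open3D. Scan files and VO poses are placeholders.
import numpy as np
import open3d as o3d

reg = o3d.pipelines.registration

scans = [o3d.io.read_point_cloud(f"scan_{i:04d}.pcd") for i in range(5)]
for s in scans:   # GICP needs per-point covariances
    s.estimate_covariances(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
T_vo = [np.eye(4) for _ in scans]   # stand-in for visual odometry estimates

def gicp(src, tgt, init):
    # Register src to tgt, starting from the supplied initial transform.
    return reg.registration_generalized_icp(
        src, tgt, 1.0, init,
        reg.TransformationEstimationForGeneralizedICP()).transformation

graph = reg.PoseGraph()
odom = np.eye(4)
graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odom)))
for k in range(1, len(scans)):
    T = gicp(scans[k - 1], scans[k], T_vo[k])   # VO pose as the initial guess
    odom = T @ odom                             # Open3D odometry convention
    graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odom)))
    graph.edges.append(reg.PoseGraphEdge(k - 1, k, T, uncertain=False))

# Loop edge between the first and last scans (found, e.g., by a visual
# bag-of-words detector); flagged uncertain so the optimizer may prune it.
T_loop = gicp(scans[0], scans[-1], np.eye(4))
graph.edges.append(reg.PoseGraphEdge(0, len(scans) - 1, T_loop, uncertain=True))

reg.global_optimization(
    graph,
    reg.GlobalOptimizationLevenbergMarquardt(),
    reg.GlobalOptimizationConvergenceCriteria(),
    reg.GlobalOptimizationOption(max_correspondence_distance=1.0))
# graph.nodes[k].pose now holds the loop-corrected pose of scan k.
```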

Finally, an experimental platform for vision-LiDAR fused 3D mapping and positioning is built to verify the theories and methods proposed in this thesis. Comparative tests among vision-only SLAM, LiDAR-only SLAM, and the fused SLAM method show that the fused method achieves higher positioning accuracy. Roadway mapping experiments in the simulated roadway of the coal mining laboratory yield a 3D point cloud map of the simulated roadway, verifying the feasibility of the proposed roadway mapping method and equipment positioning technology.
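Such an accuracy comparison could be quantified, for instance, with the root-mean-square absolute trajectory error (ATE) after rigidly aligning each estimated trajectory to ground truth; the metric and data below are assumptions for illustration, not results from the thesis.

```python
# Illustrative sketch only: RMS absolute trajectory error after Kabsch (SVD)
# alignment of an estimated trajectory to ground truth. Data are synthetic.
import numpy as np

def ate_rmse(est, gt):
    # est, gt: (N, 3) arrays of corresponding positions.
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # rotation aligning est onto gt
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

gt = np.cumsum(np.random.randn(100, 3) * 0.1, axis=0)   # synthetic ground truth
est = gt + np.random.randn(100, 3) * 0.05               # synthetic noisy estimate
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```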

Abstract (English):

The pose measurement and positioning technology of underground tunneling equipment in coal mines has become a bottleneck problem in realizing the automation of the tunneling working face. Because GPS is unavailable in the harsh environment of underground coal mines and multiple factors interfere, common surface positioning technologies are difficult to apply in practice. In recent years, visual pose measurement of tunneling equipment with the help of cooperative targets has been popularized and applied to some extent owing to its non-contact operation and high accuracy, but it faces challenges in the visual calibration of cooperative targets, target relocation, and time-consuming station moves. Therefore, this thesis proposes a roadway map construction and simultaneous localization method integrating vision and LiDAR, which obtains roadway geometric features to construct an environment map and assists roadheader pose measurement, and conducts research on LiDAR map construction, vision-based roadway reconstruction and localization, and fused SLAM methods, which is of great significance for promoting low-manned and even unmanned production at coal mine tunneling faces. The main research contents are as follows:

The thesis explores 3D map construction and roadheader positioning by fusing LiDAR and a camera, focusing on the difficult problem of roadheader pose measurement without cooperative targets. By comparing domestic and foreign coal mine roadway map construction methods and numerous positioning technologies, a roadway map construction and roadheader positioning scheme is proposed. A 3D point cloud roadway map is built to assist equipment positioning, realizing roadheader localization without cooperative targets, which lays the foundation for long-distance positioning and navigation in coal mine roadways.

To address motion distortion of the laser point cloud during roadway map construction, this thesis studies a LiDAR-based 3D map construction method for coal mines. The point cloud data are collected by LiDAR and preprocessed, and bilateral filtering and feature extraction methods for point clouds are studied. GICP point cloud matching estimates the pose between frames, so that multiple sets of consistent point clouds from different perspectives are obtained as the robot moves; these data are used to construct a 3D point cloud map of the roadway with a unified coordinate system. Ensuring the accuracy and efficiency of LiDAR map construction provides strong support for fused mapping.
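The motion distortion mentioned above arises because a spinning LiDAR accumulates points over a sweep while the carrier moves. One common correction, interpolating the sensor pose at each point's timestamp and re-projecting the point into the sweep-end frame, is sketched below; the poses and timestamps are synthetic placeholders, not the thesis's method in detail.

```python
# Illustrative sketch only: de-skewing one LiDAR sweep by interpolating the
# sensor pose per point (SLERP for rotation, lerp for translation).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Assumed sensor poses at sweep start (t=0) and end (t=1), e.g. from odometry.
R_ends = Rotation.from_euler("z", [0.0, 5.0], degrees=True)
t_start, t_end = np.zeros(3), np.array([0.5, 0.0, 0.0])
slerp = Slerp([0.0, 1.0], R_ends)

points = np.random.rand(1000, 3) * 10.0         # raw sweep in the sensor frame
stamps = np.linspace(0.0, 1.0, len(points))     # per-point time within the sweep

# Map each point to the world with its own interpolated pose, then into the
# sweep-end frame with the inverse of the end pose.
R_i = slerp(stamps)                             # one rotation per point
t_i = (1.0 - stamps)[:, None] * t_start + stamps[:, None] * t_end
world = R_i.apply(points) + t_i
deskewed = R_ends[1].inv().apply(world - t_end)
```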

To address the lack of environmental features in coal mine roadways, this thesis proposes a vision-based equipment positioning and map construction method. A calibration model for the camera intrinsics is constructed and the intrinsic parameters are solved. The ORB algorithm extracts environmental features and completes feature point matching, the GMS algorithm eliminates mismatches, and the PnP method estimates the pose. After nonlinear optimization, a 3D point cloud map is generated in real time. This provides map information for roadheader positioning and autonomous navigation, and creates the conditions for integrating the LiDAR and vision systems.
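For the intrinsic calibration step, a minimal sketch using OpenCV's implementation of Zhang's planar-target method might look as follows; the calibration image names, board size, and square size are assumed placeholders.

```python
# Illustrative sketch only: camera intrinsic calibration (Zhang's method) with
# OpenCV. Image names and checkerboard geometry are placeholders.
import glob
import cv2
import numpy as np

pattern = (9, 6)      # inner-corner grid of the assumed checkerboard
square = 0.025        # assumed square size in meters

# 3D corner coordinates on the planar board (z = 0), reused for every view.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):       # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)             # sanity check on the calibration
print("intrinsics K:\n", K)
```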

To address the complex environment, weak texture, and difficult global positioning of the roadheader in coal mine roadways, this thesis studies a SLAM algorithm framework integrating vision and LiDAR to realize simultaneous localization and mapping for tunneling equipment. First, the camera and LiDAR are jointly calibrated to clarify the spatial relationship between their observations. The pose estimate of the visual odometry serves as the initial value for point cloud registration, ensuring registration accuracy. In map construction and optimization, GICP registration improves the accuracy of pose estimation. In loop closure detection, a visual-feature bag-of-words method is used, and loop correction optimization realizes the construction of the fused map and the calculation of the roadheader pose.
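The bag-of-words idea behind that loop closure detector can be illustrated with the toy sketch below (not the thesis implementation; production systems typically use a large prebuilt binary vocabulary such as DBoW2 with TF-IDF weighting and temporal checks): ORB descriptors are clustered into visual words, each keyframe becomes a normalized word histogram, and keyframe pairs with highly similar histograms are flagged as loop candidates.

```python
# Illustrative sketch only: a tiny visual bag-of-words loop-closure detector.
# Keyframe images, vocabulary size, and the score threshold are placeholders.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
frames = [cv2.imread(f"keyframe_{i:03d}.png", cv2.IMREAD_GRAYSCALE)
          for i in range(20)]                       # hypothetical keyframes
descs = [orb.detectAndCompute(f, None)[1] for f in frames]

# Small vocabulary via k-means over all descriptors (converted to float32).
trainer = cv2.BOWKMeansTrainer(64)
for d in descs:
    trainer.add(np.float32(d))
vocab = trainer.cluster()                           # (64, 32) word centers

def bow_histogram(d, vocab):
    # Assign each descriptor to its nearest word; return a unit-norm histogram.
    dists = np.linalg.norm(np.float32(d)[:, None, :] - vocab[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(vocab)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

hists = [bow_histogram(d, vocab) for d in descs]
query = len(hists) - 1
scores = [hists[query] @ h for h in hists[:query - 5]]  # skip recent frames
best = int(np.argmax(scores))
if scores[best] > 0.8:                                  # assumed threshold
    print(f"loop closure candidate: keyframe {query} <-> keyframe {best}")
```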

Finally, an experimental platform for vision-LiDAR fused 3D map construction and positioning is built to verify the theory and methods proposed in this thesis. Comparative tests among vision-only SLAM, LiDAR-only SLAM, and the fused SLAM method show that the fused SLAM method achieves higher positioning accuracy. In the simulated roadway of the coal mining laboratory, the roadway map construction method is tested and a 3D point cloud map of the simulated roadway is obtained, verifying the feasibility of the roadway map construction method and the equipment positioning technology.


CLC Number:

 TD421    

Open Access Date:

 2024-06-14    

