
论文中文题名:

 地下空间多源传感器数据融合SLAM方法研究    

姓名:

 杨鑫    

学号:

 21210226065    

保密级别:

 公开    

论文语种:

 chi    

学科代码:

 085700    

学科名称:

 工学 - 资源与环境    

学生类型:

 硕士    

学位级别:

 工学硕士    

学位年度:

 2024    

培养单位:

 西安科技大学    

院系:

 测绘科学与技术学院    

专业:

 测绘工程    

研究方向:

 多源融合SLAM    

第一导师姓名:

 姚顽强    

第一导师单位:

 西安科技大学    

论文提交日期:

 2024-06-13    

论文答辩日期:

 2024-06-03    

论文外文题名:

 SLAM method for multi-source sensor data fusion in underground space    

论文中文关键词:

 地下空间 ; 同时定位与建图 ; 多源传感器数据融合 ; 图像增强 ; 智能感知    

论文外文关键词:

 underground space ; SLAM ; multi-source sensor data fusion ; image enhancement ; intelligent perception    

论文中文摘要:

城市发展既要“广度”,也要“深度”。向地下要空间已成为城市发展的必然,并成为衡量城市现代化的重要标志之一。国务院发布《“十四五”全国城市基础设施建设规划》,明确提出以“统筹地上、地下空间利用”的手段来转变城市发展方式,推动我国对地下空间的智能开发与利用。基于同时定位与建图(Simultaneous Localization and Mapping,SLAM)技术的移动机器人能够进行智能感知和避障,并应用于复杂地下空间救援、自主探测、自动巡检、无人运输等,已成为地下空间智能开发与利用亟待攻克的关键技术之一。然而,地下空间环境复杂、工况恶劣,全球导航卫星系统(Global Navigation Satellite System,GNSS)信号拒止,导致当前单一传感器SLAM技术不能满足智能感知精度及可靠性的需求。融合多源观测数据的SLAM技术可充分发挥不同类型传感器的优势,具有更高的精度和鲁棒性。因此,本文根据地下空间智能感知的实际需求,围绕移动机器人精确定位与地图构建这两个关键任务展开研究,主要研究工作及贡献如下:

(1)狭长空间或自相似场景无法为基于激光雷达(Light Detection and Ranging,LiDAR)的SLAM提供足够的几何约束,导致其位姿估计极易退化。为此,本文提出了一种LiDAR和惯性测量单元(Inertial Measurement Unit,IMU)融合的SLAM方法。使用扰动模型来检测LiDAR位姿估计退化的方向和程度,将退化状态分为旋转和平移两种形式,利用IMU状态以投影和融合的方式进行退化补偿,再基于滑动窗口因子图优化实现全局一致的SLAM。在地下通道和室内走廊等典型地下空间中进行了实验,并与LeGO-LOAM、LIO-SAM等主流方法进行定性、定量对比分析。结果表明:本文方法定位轨迹距离误差百分比小于0.66%,在几何约束较少的退化场景中具有较高的定位精度。
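
下面给出一个基于信息矩阵特征值分析判断退化方向的最小示例(Python/NumPy),仅作示意:其中的特征值阈值为假设取值,与论文中的扰动模型并非同一实现。

```python
import numpy as np

def detect_degeneracy(J, thr_rot=80.0, thr_trans=60.0):
    """基于近似信息矩阵 H = J^T J 的特征值分析,检测位姿估计的退化方向。

    J: 形状为 (N, 6) 的残差雅可比矩阵,前 3 列对应旋转、后 3 列对应平移;
    thr_rot / thr_trans: 假设的特征值阈值,实际需按传感器与场景标定。
    返回旋转、平移两个子块中约束不足(退化)方向的单位向量列表。
    """
    H = J.T @ J
    degenerate = {"rotation": [], "translation": []}
    for name, sl, thr in (("rotation", slice(0, 3), thr_rot),
                          ("translation", slice(3, 6), thr_trans)):
        w, V = np.linalg.eigh(H[sl, sl])      # 特征值升序排列,列为特征向量
        for lam, v in zip(w, V.T):
            if lam < thr:                      # 特征值过小 => 该方向几何约束不足
                degenerate[name].append(v)
    return degenerate
```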

(2)地下空间环境复杂,半结构化、弱纹理、光照不均等使得多源传感器极易出现退化现象,从而导致多源传感器数据融合SLAM精度不足,甚至失效。在上一研究工作的基础上,进一步研究增强鲁棒性的多源传感器数据动态加权融合SLAM方法。在视觉图像部分增加图像增强算法,以增强地下空间图像的亮度和对比度。利用马氏距离一致性检测方法来评估传感器数据质量,自适应选择当前环境的传感器数据进行融合。然后,根据传感器数据质量构建多源传感器数据权重动态组合模型,动态地调整各传感器数据融合因子的权重,从而提高SLAM的精度和鲁棒性。为验证本文方法的效果,在地下车库、地铁隧道、煤矿井下等典型地下空间进行了实验,并与多种主流SLAM方法进行定性、定量对比分析。结果表明:本文方法可以在复杂地下空间中实时稳健地运行,其轨迹最大均方根误差(Root Mean Square Error,RMSE)仅为0.19 m。以高精度地面三维激光扫描获取的点云为参考,平均点云直接距离比较(Cloud to Cloud,C2C)小于0.13 m,所构建的点云地图具有较好的全局一致性和几何结构真实性,验证了本文方法在地下空间具有较高的精度和鲁棒性。
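
以下为马氏距离一致性检测与动态权重分配的一个最小示意(Python/NumPy),其中卡方阈值与反距离加权方式均为假设,并非论文原实现:

```python
import numpy as np

def mahalanobis_gate(residual, cov, chi2_thr=7.815):
    """马氏距离一致性检测:d^2 = r^T * Sigma^{-1} * r。
    chi2_thr 取 3 自由度、95% 置信水平的卡方分位数(假设值)。
    返回 (是否通过检测, d^2)。"""
    d2 = float(residual @ np.linalg.inv(cov) @ residual)
    return d2 < chi2_thr, d2

def dynamic_weights(d2_by_sensor, eps=1e-6):
    """按各传感器残差的马氏距离动态分配融合因子权重:
    距离越小说明数据质量越好,权重越大,最后归一化。"""
    raw = {k: 1.0 / (d2 + eps) for k, d2 in d2_by_sensor.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}
```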

论文外文摘要:

Urban development requires both breadth and depth. Exploiting underground space has become an inevitable trend in urban development and an important indicator of urban modernization. The State Council has issued the "14th Five-Year Plan for National Urban Infrastructure Construction", which explicitly proposes transforming the mode of urban development by "coordinating the utilization of above-ground and underground space" and promoting the intelligent development and utilization of underground space in China. Mobile robots based on Simultaneous Localization and Mapping (SLAM) can perceive their surroundings intelligently and avoid obstacles, and are applied to rescue, autonomous exploration, automatic inspection, and unmanned transportation in complex underground spaces; SLAM has therefore become one of the key technologies that must be mastered for the intelligent development and utilization of underground space. However, the underground environment is complex, working conditions are harsh, and Global Navigation Satellite System (GNSS) signals are denied, so current single-sensor SLAM cannot meet the accuracy and reliability requirements of intelligent perception. SLAM that fuses multi-source observation data can fully exploit the advantages of different types of sensors and achieves higher accuracy and robustness. Therefore, driven by the practical needs of intelligent perception in underground space, this thesis focuses on the two key tasks of precise localization and map construction for mobile robots. The main research work and contributions are as follows:

(1) SLAM based on Light Detection and Ranging (LiDAR) is prone to degeneracy in long, narrow spaces or self-similar scenes, which cannot provide sufficient geometric constraints for LiDAR SLAM. To address this, this thesis proposes a SLAM method that fuses LiDAR and an Inertial Measurement Unit (IMU). A perturbation model is used to detect the direction and degree of degeneracy in LiDAR pose estimation, and the degenerate state is classified into two forms, rotation and translation. The IMU state is then used to compensate for the degeneracy by projection and fusion, and globally consistent SLAM is achieved through sliding-window factor graph optimization. Experiments were conducted in typical underground spaces such as underground passages and indoor corridors, and qualitative and quantitative comparisons were made with mainstream methods such as LeGO-LOAM and LIO-SAM. The results show that the trajectory distance error percentage of the proposed method is below 0.66%, giving high localization accuracy in degenerate scenes with few geometric constraints.
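
As an illustration of how IMU information might replace the LiDAR estimate along weak directions, the following Python/NumPy sketch projects the two pose updates onto the eigenvectors of the LiDAR information matrix, in the spirit of solution remapping [77]; the eigenvalue threshold and variable names are assumptions, not the thesis implementation.

```python
import numpy as np

def compensate_degeneracy(x_lidar, x_imu, H, eig_thr=100.0):
    """Fuse LiDAR and IMU pose updates direction-by-direction.

    x_lidar, x_imu : (6,) pose-update vectors [rotation, translation] from the
                     LiDAR optimization and the IMU prediction respectively.
    H              : (6, 6) LiDAR information matrix (J^T J).
    eig_thr        : illustrative eigenvalue cutoff; directions below it are
                     treated as degenerate and taken from the IMU instead.
    """
    eigvals, eigvecs = np.linalg.eigh(H)
    x_fused = np.zeros(6)
    for lam, v in zip(eigvals, eigvecs.T):
        source = x_imu if lam < eig_thr else x_lidar   # IMU along weak directions
        x_fused += v * float(v @ source)               # project chosen update onto v
    return x_fused
```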

(2) The underground environment is complex: semi-structured scenes, weak textures, and uneven illumination make multi-source sensors highly prone to degradation, leading to insufficient accuracy or even failure of multi-source sensor data fusion SLAM. Building on the previous work, a dynamically weighted multi-source sensor data fusion SLAM method with enhanced robustness is further studied. An image enhancement algorithm is added to the visual pipeline to increase the brightness and contrast of underground images. Sensor data quality is evaluated with a Mahalanobis-distance consistency check, and the sensor data suited to the current environment are adaptively selected for fusion. A dynamic weight combination model for the multi-source sensor data is then constructed according to data quality, and the weights of the individual sensor fusion factors are adjusted dynamically, improving the accuracy and robustness of SLAM. To verify the effectiveness of the proposed method, experiments were conducted in typical underground spaces such as underground garages, subway tunnels, and underground coal mines, with qualitative and quantitative comparisons against several mainstream SLAM methods. The results show that the proposed method runs robustly in real time in complex underground spaces, with a maximum trajectory Root Mean Square Error (RMSE) of only 0.19 m. Taking the point cloud acquired by high-precision terrestrial 3D laser scanning as a reference, the average Cloud-to-Cloud (C2C) distance is less than 0.13 m; the constructed point cloud map shows good global consistency and geometric fidelity, verifying the high accuracy and robustness of the proposed method in underground space.
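
As a small example of the kind of image enhancement the visual pipeline could apply to dark underground frames, the sketch below uses OpenCV's CLAHE on the L channel in LAB space; the clip limit and tile size are illustrative defaults rather than the settings used in the thesis.

```python
import cv2

def enhance_underground_image(bgr, clip_limit=3.0, tile_grid=(8, 8)):
    """Boost brightness and local contrast of a low-light underground image.

    CLAHE is applied only to the lightness (L) channel so colours are preserved;
    clip_limit and tile_grid are assumed defaults, not the thesis parameters.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

# Usage: enhanced = enhance_underground_image(cv2.imread("frame.png"))
```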

参考文献:

[1] 程光华, 王睿, 赵牧华 等. 国内城市地下空间开发利用现状与发展趋势[J]. 地学前缘, 2019, 26(03): 39-47.

[2] 刘美芝. 我国城市地下空间开发利用政策法规研究[J]. 工程质量, 2011, 29(03): 3-7.

[3] 陈锐志, 王磊, 李德仁 等. 导航与遥感技术融合综述[J]. 测绘学报, 2019, 48(12): 1507-1522.

[4] 李德仁. 展望5G/6G时代的地球空间信息技术[J]. 测绘学报, 2019, 48(12): 1475-1481.

[5] 张永军, 张祖勋, 龚健雅. 天空地多源遥感数据的广义摄影测量学[J]. 测绘学报, 2021, 50(01): 1-11.

[6] Huang K, Shi B, Li X, et al. Multi-modal Sensor Fusion for Auto Driving Perception: A Survey [J]. arXiv preprint arXiv:2202.02703, 2022.

[7] 白师宇, 赖际舟, 吕品 等. 面向地下空间探测的移动机器人定位与感知方法[J]. 机器人, 2022, 44(04): 463-470.

[8] 马艾强, 姚顽强, 蔺小虎 等. 面向煤矿巷道环境的LiDAR与IMU融合定位与建图方法[J]. 工矿自动化, 2022, 48(12): 49-56.

[9] Tranzatto M, Miki T, Dharmadhikari M, et al. CERBERUS in the DARPA Subterranean Challenge[J]. Science Robotics, 2022, 7(66): eabp9742.

[10] Best G, Garg R, Keller J, et al. Resilient multi-sensor exploration of multifarious environments with a team of aerial robots[C]//Robotics: Science and Systems (RSS). 2022.

[11] Ebadi K, Bernreiter L, Biggie H, et al. Present and Future of SLAM in Extreme Environments: The DARPA SubT Challenge[J]. IEEE Transactions on Robotics, 2023.

[12] 高毅楠, 姚顽强, 蔺小虎 等. 煤矿井下多重约束的视觉SLAM关键帧选取方法[J/OL]. 煤炭学报: 1-14 [2024-04-03].

[13] Lin X, Yang B, Wang F, et al. Dense 3D surface reconstruction of large-scale streetscape from vehicle-borne imagery and LiDAR[J]. International Journal of Digital Earth, 2021, 14(5): 619-639.

[14] 王铉彬, 李星星, 廖健驰 等. 基于图优化的紧耦合双目视觉/惯性/激光雷达SLAM方法[J]. 测绘学报, 2022, 51(08): 1744-1756.

[15] Best G, Garg R, Keller J, et al. Resilient multi-sensor exploration of multifarious environments with a team of aerial robots[C]//Robotics: Science and Systems (RSS). 2022.

[16] Li S, Li X, Wang H, et al. Multi-GNSS PPP/INS/Vision/LiDAR tightly integrated system for precise navigation in urban environments[J]. Information Fusion, 2023, 90: 218-232.

[17] Sizintsev M, Rajvanshi A, Chiu H P, et al. Multi-sensor fusion for motion estimation in visually-degraded environments[C]//2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2019: 7-14.

[18] 李帅鑫, 李九人, 田滨 等. 面向点云退化的隧道环境的无人车激光SLAM方法[J]. 测绘学报, 2021, 50(11): 1487-1499.

[19] Yang X, Lin X, Yao W, et al. A Robust LiDAR SLAM Method for Underground Coal Mine Robot with Degenerated Scene Compensation[J]. Remote Sensing, 2022, 15(1): 186.

[20] Ebadi K, Chang Y, Palieri M, et al. LAMP: Large-scale autonomous mapping and positioning for exploration of perceptually-degraded subterranean environments[C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 80-86.

[21] Chang Y, Ebadi K, Denniston C E, et al. LAMP 2.0: A robust multi-robot SLAM system for operation in challenging large-scale underground environments[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 9175-9182.

[22] Gohl P, Burri M, Omari S, et al. Towards autonomous mine inspection[C]//Proceedings of the 2014 3rd International Conference on Applied Robotics for the Power Industry. IEEE, 2014: 1-6.

[23] 王保兵, 王凯, 王丹丹 等. 地下复杂空间无人机研究进展及其面临的挑战[J]. 工矿自动化, 2023, 49(07): 6-13, 48.

[24] Smith R C, Cheeseman P. On the representation and estimation of spatial uncertainty[J]. The international journal of Robotics Research, 1986, 5(4): 56-68.

[25] Covolan J P M, Sementille A C, Sanches S R R. A mapping of visual SLAM algorithms and their applications in augmented reality[C]//2020 22nd Symposium on Virtual and Augmented Reality (SVR). IEEE, 2020: 20-29.

[26] Liu C, Zhou C, Cao W, et al. A novel design and implementation of autonomous robotic car based on ROS in indoor scenario[J]. Robotics, 2020, 9(1): 19.

[27] Dworakowski D, Thompson C, Pham-Hung M, et al. A robot architecture using contextslam to find products in unknown crowded retail environments[J]. Robotics, 2021, 10(4): 110.

[28] Montemerlo M, Thrun S, Koller D, et al. FastSLAM: A factored solution to the simultaneous localization and mapping problem[C]//Proceedings of the AAAI National Conference on Artificial Intelligence (AAAI/IAAI). 2002: 593-598.

[29] Grisetti G, Stachniss C, Burgard W. Improved techniques for grid mapping with rao-blackwellized particle filters[J]. IEEE transactions on Robotics, 2007, 23(1): 34-46.

[30] Kohlbrecher S, Von Stryk O, Meyer J, et al. A flexible and scalable SLAM system with full 3D motion estimation[C]//2011 IEEE international symposium on safety, security, and rescue robotics. IEEE, 2011: 155-160.

[31] Hess W, Kohler D, Rapp H, et al. Real-time loop closure in 2D LIDAR SLAM[C]//2016 IEEE international conference on robotics and automation (ICRA). IEEE, 2016: 1271-1278.

[32] Macenski S, Jambrecic I. SLAM Toolbox: SLAM for the dynamic world[J]. Journal of Open Source Software, 2021, 6(61): 2783.

[33] Tee Y K, Han Y C. Lidar-based 2D SLAM for mobile robot in an indoor environment: A review[C]//2021 International Conference on Green Energy, Computing and Sustainable Technology (GECOST). IEEE, 2021: 1-7.

[34] Zhang J, Singh S. LOAM: Lidar odometry and mapping in real-time[C]//Robotics: Science and systems. 2014, 2(9): 1-9.

[35] Shan T, Englot B. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 4758-4765.

[36] Qin C, Ye H, Pranata C E, et al. Lins: A lidar-inertial state estimator for robust and efficient navigation[C]//2020 IEEE international conference on robotics and automation (ICRA). IEEE, 2020: 8899-8906.

[37] Shan T, Englot B, Meyers D, et al. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping[C]//2020 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2020: 5135-5142.

[38] Guo H, Zhu J, Chen Y. E-LOAM: LiDAR odometry and mapping with expanded local structural information[J]. IEEE Transactions on Intelligent Vehicles, 2022, 8(2): 1911-1921.

[39] Wang Z, Zhang L, Shen Y, et al. D-liom: Tightly-coupled direct lidar-inertial odometry and mapping[J]. IEEE Transactions on Multimedia, 2022.

[40] He D, Xu W, Chen N, et al. Point‐LIO: Robust High‐Bandwidth Light Detection and Ranging Inertial Odometry[J]. Advanced Intelligent Systems, 2023: 2200459.

[41] Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE transactions on pattern analysis and machine intelligence, 2007, 29(6): 1052-1067.

[42] Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM international symposium on mixed and augmented reality. IEEE, 2007: 225-234.

[43] Stanimirovic D, Kusuma A Y, Anwar S. RGB-D Mapping: Using Depth Cameras for 3D Modeling of Indoor Environments[J]. Erasmus Mundus MSc in Vision and Robotics 3D digitization, Indoor mapping using Kinect January, 2012.

[44] Ng P C, Henikoff S. SIFT: Predicting amino acid changes that affect protein function[J]. Nucleic acids research, 2003, 31(13): 3812-3814.

[45] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE transactions on robotics, 2015, 31(5): 1147-1163.

[46] Mur-Artal R, Tardós J D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras[J]. IEEE transactions on robotics, 2017, 33(5): 1255-1262.

[47] Campos C, Elvira R, Rodríguez J J G, et al. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.

[48] Newcombe R A, Lovegrove S J, Davison A J. DTAM: Dense tracking and mapping in real-time[C]//2011 international conference on computer vision. IEEE, 2011: 2320-2327.

[49] Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European conference on computer vision. Cham: Springer International Publishing, 2014: 834-849.

[50] Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. IEEE Transactions on Robotics, 2020, 36(4): 1363-1370.

[51] Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2016, 33(2): 249-265.

[52] Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(3): 611-625.

[53] 顾恺琦, 刘晓平, 王刚 等. 基于在线光度标定的半直接视觉SLAM算法[J]. 机器人, 2022, 44(06): 672-681.

[54] Kalman R E, Bucy R S. New Results in Linear Filtering and Prediction Theory [J]. Transactions of the ASME-Journal of Basic Engineering, 1961, 83 (Series D): 95-108.

[55] Bell B M, Cathey F W. The iterated Kalman filter update as a Gauss-Newton method [J]. IEEE Transactions on Automatic Control, 1993, 38(2):294-297.

[56] Huang G, Truax R, Kaess M, et al. Unscented iSAM: A consistent incremental solution to cooperative localization and target tracking[C]//European Conference on Mobile Robots, 2013:248-254.

[57] Pfeifer T, Protzel P. Robust Sensor Fusion with Self-tuning Mixture Models [C]// IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018:3678-3685.

[58] Zuo X, Geneva P, Lee W, et al. Lic-fusion: Lidar-inertial-camera odometry[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 5848-5854.

[59] 石金龙, 马宏伟, 毛清华, 等. “惯导+里程计”的采煤机定位方法研究[J]. 煤炭工程, 2021, 53(10): 143-147.

[60] 葛世荣, 王世佳, 曹波 等. 智能采运机组自主定位原理与技术[J]. 煤炭学报, 2022, 47(1): 75-86.

[61] 赵玏洋, 闫利. 移动机器人SLAM位姿估计的改进四元数无迹卡尔曼滤波[J]. 测绘学报, 2022, 51(02): 212-223.

[62] Indelman V, Williams S, Kaess M, et al. Information Fusion in Navigation Systems via Factor Graph Based Incremental Smoothing [J]. Robotics & Autonomous Systems, 2013, 61(8):721-738.

[63] Wen W, Bai X, Kan Y C, et al. Tightly Coupled GNSS/INS Integration via Factor Graph and Aided by Fish-eye Camera [J]. IEEE Transactions on Vehicular Technology, 2019, (99):1-1.

[64] Mikhail S, Abhinav R, Han-Pang C, et al. Multi-Sensor Fusion for Motion Estimation in Visually-Degraded Environments[C]//IEEE International Symposium on Safety, Security, and Rescue Robotics, 2019:7-14.

[65] Zhang J, Singh S. Laser-visual-inertial Odometry and Mapping with High Robustness and Low Drift [J]. Journal of Field Robotics, 2018, 35(8): 1242-1264.

[66] Shao W, Vijayarangan S, Li C, et al. Stereo Visual Inertial LiDAR Simultaneous Localization and Mapping[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019:370-377.

[67] Wang Z, Zhang J, Chen S, et al. Robust high accuracy visual-inertial-laser slam system[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 6636-6641.

[68] 蔺小虎. 城市复杂环境下Visual/INS/LiDAR融合高精度位姿估计方法研究[J]. 测绘学报, 2023, 52(05): 864.

[69] Koide K, Yokozuka M, Oishi S, et al. Globally Consistent and Tightly Coupled 3D LiDAR Inertial Mapping[C]//2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.

[70] 刘帅, 孙付平, 陈坡 等. GPS/INS组合导航系统时间同步方法综述[J]. 全球定位系统, 2012, 37(01): 53-56.

[71] 周阳林. 视觉传感器辅助的高精度组合导航定位技术研究[D]. 郑州: 战略支援部队信息工程大学, 2018.

[72] 李帅鑫. 激光雷达/相机组合的3D SLAM技术研究[D]. 郑州: 战略支援部队信息工程大学, 2018.

[73] Shin E H. Estimation techniques for low-cost inertial navigation[D]. Calgary: University of Calgary, 2005.

[74] 刘春辉. 卡尔曼滤波在GNSS导航系统中的应用[D]. 北京: 北京邮电大学, 2013.

[75] Qin T, Li P, Shen S. Vins-mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.

[76] Bogoslavskyi I, Stachniss C. Efficient online segmentation for sparse 3D laser scans[J]. PFG–Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 2017, 85: 41-52.

[77] Zhang J, Kaess M, Singh S. On degeneracy of optimization-based state estimation problems[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 809-816.

[78] Forster C, Carlone L, Dellaert F, et al. On-manifold preintegration for real-time visual-inertial odometry[J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.

[79] Autoware. Autoware calibration toolkit [EB/OL]. [2021-08-20]. https://github.com/autowarefoundation/autoware.ai/autoware.

[80] Furgale P, Rehder J, Siegwart R. Unified temporal and spatial calibration for multi-sensor systems[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 1280-1286.

[81] Di S, Sun W. Research on low illumination image processing algorithm based on adaptive parameter homomorphic filtering[C]//2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML). IEEE, 2022: 681-685.

[82] Hana F M, Maulida I D. Analysis of contrast limited adaptive histogram equalization (CLAHE) parameters on finger knuckle print identification[C]//Journal of Physics: Conference Series. IOP Publishing, 2021, 1764(1): 012049.

[83] Shi J. Good features to track[C]//1994 Proceedings of IEEE conference on computer vision and pattern recognition. IEEE, 1994: 593-600.

[84] Lepetit V, Moreno-Noguer F, Fua P. EPnP: An accurate O(n) solution to the PnP problem[J]. International journal of computer vision, 2009, 81: 155-166.

[85] Hackett J K, Shah M. Multi-sensor fusion: a perspective[C]//Proceedings. IEEE International Conference on Robotics and Automation. IEEE, 1990: 1324-1330.

[86] Christian H, Agus M P, Suhartono D. Single document automatic text summarization using term frequency-inverse document frequency (TF-IDF)[J]. ComTech: Computer, Mathematics and Engineering Applications, 2016, 7(4): 285-294.

[87] Gálvez-López D, Tardos J D. Bags of binary words for fast place recognition in image sequences[J]. IEEE Transactions on Robotics, 2012, 28(5): 1188-1197.

[88] Calonder M, Lepetit V, Strecha C, et al. Brief: Binary robust independent elementary features[C]//Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11. Springer Berlin Heidelberg, 2010: 778-792.

[89] 蔺小虎. 城市复杂环境下Visual/INS/LiDAR融合高精度位姿估计方法研究[D]. 武汉: 武汉大学, 2021.

[90] Wang F, Zhang B, Zhang C, et al. Low-light image joint enhancement optimization algorithm based on frame accumulation and multi-scale Retinex[J]. Ad Hoc Networks, 2021, 113: 102398.

[91] Dong S, Ma J, Su Z, et al. Robust circular marker localization under non-uniform illuminations based on homomorphic filtering[J]. Measurement, 2021, 170: 108700.

中图分类号:

 T237    

开放日期:

 2024-06-13    
