Thesis Information

Title:

 Research on the Key Technologies of Visual SLAM for Underground Mobile Robots in Coal Mines

Author:

 高毅楠

Student ID:

 21210061039

Confidentiality Level:

 Classified (to be opened after 4 years)

Language:

 Chinese

Discipline Code:

 081601

Discipline:

 Engineering - Surveying and Mapping Science and Technology - Geodesy and Survey Engineering

Student Type:

 Master's

Degree:

 Master of Engineering

Year of Degree:

 2024

University:

 Xi'an University of Science and Technology

College:

 College of Surveying and Mapping Science and Technology

Major:

 Surveying and Mapping Science and Technology

Research Direction:

 Visual SLAM

Supervisor:

 姚顽强

Supervisor Affiliation:

 Xi'an University of Science and Technology

Submission Date:

 2024-06-13

Defense Date:

 2024-06-03

English Title:

 Research on the Key Technologies of Visual SLAM for Underground Mobile Robots in Coal Mines

Keywords:

 Coal Mine Intelligence ; Mobile Robot ; Visual SLAM ; Keyframe Selection ; Dense Mapping

English Keywords:

 Intelligent Underground Coal Mine ; Mobile Robot ; Visual SLAM ; Keyframe Selection ; Dense Reconstruction

Abstract:

In recent years, the national energy security strategy has been continuously upgraded and intelligent technology has been deeply integrated into coal enterprises; coal mine intelligence has become both the technical underpinning and an urgent need for the high-quality development of coal enterprises. Mobile robots are key to realizing intelligent, unmanned coal mines and are of epoch-making significance for improving safety, reducing miners' labor burden, and enabling efficient management. Visual Simultaneous Localization and Mapping (VSLAM), with its high frame rate, low cost, rich color information, and strong performance in GNSS-denied environments, has become a key technology for accurate positioning and navigation of mobile robots in underground coal mines. However, in the harsh underground environment of narrow spaces, low visibility, and scarce texture, existing visual SLAM systems struggle to achieve the desired positioning accuracy and mapping quality. This thesis therefore focuses on improving the accuracy and robustness of visual SLAM for underground mobile robots, studying image enhancement algorithms, keyframe selection methods, and 3D dense mapping for underground coal mines, with the aim of advancing coal enterprises toward intelligent and unmanned operation. The main research content is summarized as follows:

(1) Low illumination in underground coal mines poses a great challenge to binocular stereo matching, so image quality must be enhanced before matching. This thesis proposes a Retinex image enhancement algorithm based on bilateral filtering. First, the input RGB (Red, Green, Blue) image is converted to the HSI (Hue, Saturation, Intensity) color space so that hue, saturation, and intensity can be processed separately. Second, a bilateral filter replaces the Gaussian kernel of the traditional Retinex algorithm to estimate a more accurate reflectance component. Finally, the processed image is converted back from HSI to RGB, yielding an enhanced image with markedly improved contrast and evenly distributed brightness. Experiments show that, compared with the Retinex and multi-scale Retinex algorithms, images processed by the proposed algorithm exhibit no over-brightening or halo artifacts, and image quality is clearly improved.
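The core of this step, Retinex with an edge-preserving illumination estimate, can be sketched on the intensity channel alone (the full method also converts RGB↔HSI). This is a minimal illustration, not the thesis's implementation: the filter parameters and the final [0, 1] contrast stretch are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter (spatial * range Gaussian) on a float image in [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def retinex_enhance(intensity, eps=1e-6):
    """Single-scale Retinex where the Gaussian surround is replaced by a
    bilateral filter, giving an edge-preserving illumination estimate."""
    illumination = bilateral_filter(intensity)
    # reflectance in the log domain: log R = log I - log L
    log_r = np.log(intensity + eps) - np.log(illumination + eps)
    # stretch the reflectance back to [0, 1] (illustrative normalization)
    return (log_r - log_r.min()) / (log_r.max() - log_r.min() + eps)
```

Because the bilateral range kernel down-weights pixels across strong edges, the estimated illumination does not bleed over lamp boundaries, which is what suppresses the halo artifacts seen with a plain Gaussian surround.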

(2) Existing keyframe selection methods that rely on heuristic thresholds cannot deliver the effectiveness and positioning accuracy required by visual SLAM in underground coal mines, so this thesis proposes a multi-constraint keyframe selection method for visual SLAM. An adaptive threshold constrained by geometric structure replaces the static heuristic threshold, making keyframe selection effective; the distribution of valid feature points is homogenized via a center-of-gravity balance principle, ensuring the stability of the selection method and the density of the created map points; and a heading-angle threshold imposes an additional constraint at turns, reducing the impact of abrupt viewpoint changes on keyframe selection accuracy. To validate the method, a mobile robot data acquisition platform was independently designed and integrated, and experiments were carried out in an indoor scene and in an underground coal mine. Compared with the heuristic visual SLAM keyframe selection method, the proposed method reduces the RMSE by 32% indoors and 37% underground, demonstrating high positioning accuracy and robustness.
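The three constraints above can be sketched as a single decision function. All numeric thresholds here (15° yaw, 2% of median scene depth, 0.5 centroid offset, 0.3 tracked ratio) and the function signature are illustrative placeholders, not the thesis's actual values:

```python
import numpy as np

def is_keyframe(tracked_ratio, points_uv, image_size, yaw_change_deg,
                median_scene_depth, baseline_to_last_kf):
    """Multi-constraint keyframe test (sketch; thresholds are illustrative).

    1. Heading-angle constraint: force a keyframe on sharp turns, where the
       viewpoint changes abruptly.
    2. Adaptive geometric threshold: require the translation since the last
       keyframe to exceed a fraction of the median scene depth, so the
       threshold scales with the environment instead of being a fixed constant.
    3. Center-of-gravity balance: accept only if the tracked features are
       spread out, judged by how close their centroid is to the image center.
    """
    h, w = image_size
    # (1) sharp turn -> always insert a keyframe
    if abs(yaw_change_deg) > 15.0:
        return True
    # (2) adaptive translation threshold proportional to scene depth
    if baseline_to_last_kf < 0.02 * median_scene_depth:
        return False
    # (3) centroid-balance check on the tracked feature distribution
    centroid = points_uv.mean(axis=0)
    offset = np.abs(centroid - np.array([w / 2, h / 2])) / np.array([w / 2, h / 2])
    well_distributed = bool(offset.max() < 0.5)
    # also require enough surviving feature tracks
    return well_distributed and tracked_ratio > 0.3
```

Scaling the translation threshold by scene depth is what makes the criterion adaptive: in a narrow roadway (small depth) keyframes are inserted more densely than in a long open gallery.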

(3) The sparse point cloud map built by visual SLAM cannot fully represent underground scene information such as terrain, obstacles, and equipment layout, and thus cannot adequately support navigation, path planning, and obstacle avoidance underground. A dense point cloud map captures more environmental detail, providing rich scene information and data support for various underground applications. To this end, this thesis proposes a fast dense mapping method for underground keyframe images based on the ORB-SLAM3 framework. The ELAS algorithm estimates a visual point cloud for each keyframe, local point clouds are fused using the estimated poses, and global Bundle Adjustment (BA) optimizes the overall point cloud. Underground experiments show that the constructed point cloud map exhibits good global consistency and geometric fidelity, verifying the effectiveness of the method.
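The geometric core of this pipeline, back-projecting a keyframe disparity map and fusing it into the world frame with the estimated pose, can be sketched as follows. This assumes a disparity map is already available (e.g. from ELAS) and standard pinhole intrinsics; it is an illustration, not the thesis's code:

```python
import numpy as np

def disparity_to_points(disp, fx, fy, cx, cy, baseline):
    """Back-project a stereo disparity map into 3-D camera-frame points.
    Pinhole stereo model: Z = fx * b / d, X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    v, u = np.nonzero(disp > 0)               # keep valid disparities only
    d = disp[v, u].astype(np.float64)
    z = fx * baseline / d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)        # (N, 3) points in the camera frame

def fuse_into_world(points_cam, T_wc):
    """Transform camera-frame points into the world frame using the 4x4
    keyframe pose T_wc estimated by the SLAM front end."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_wc @ homo.T).T[:, :3]
```

Per-keyframe clouds produced this way share a common world frame, so the subsequent global BA only needs to refine the keyframe poses for the fused map to become consistent.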

English Abstract:

 

In recent years, the national energy security strategy has been continuously upgraded and intelligent technology has been deeply integrated with coal enterprises; coal mine intelligence has become both the technical support and an urgent demand for the high-quality development of coal enterprises. Mobile robots are the key to realizing intelligent, unmanned coal mines, and are of epoch-making significance for enhancing the safety of coal mining enterprises, reducing the labor burden of miners, and realizing efficient management. Visual Simultaneous Localization and Mapping (VSLAM), with its high frame rate, low cost, rich color information, and excellent performance in GNSS-denied environments, has become the key technology for accurate positioning and navigation of mobile robots in underground coal mines. However, in the harsh underground environment of narrow spaces, low visibility, and scarce texture information, existing visual SLAM cannot achieve the desired positioning accuracy and mapping quality. This thesis therefore focuses on improving the accuracy and robustness of VSLAM for mobile robots in underground coal mines, examining image enhancement algorithms, keyframe selection methods, and 3D dense mapping, with the aim of promoting the development of coal enterprises toward intelligent and unmanned operation. The main research content of the thesis is summarized as follows:

(1) Low illumination in underground coal mines poses a severe challenge to binocular stereo matching, so image quality must be enhanced beforehand. This thesis proposes a Retinex image enhancement algorithm based on bilateral filtering. The input RGB (Red, Green, Blue) image is first converted to the HSI (Hue, Saturation, Intensity) color space; a bilateral filter then replaces the Gaussian kernel of the traditional Retinex algorithm to estimate a more accurate reflectance component; finally, the result is converted back to RGB, yielding an enhanced image with significantly improved contrast and uniform brightness. Experiments show that, compared with the Retinex and multi-scale Retinex algorithms, the proposed algorithm produces no over-brightening or halo artifacts and clearly improves image quality.

(2) Existing keyframe selection methods relying on heuristic thresholds are ineffective and inaccurate for visual SLAM in underground coal mines, so this thesis proposes a multiple-constraint VSLAM keyframe selection method. Adaptive thresholds with geometric structure constraints supplant static heuristic thresholds, making keyframe selection effective. The distribution of effective feature points is homogenized by a center-of-gravity balance principle, ensuring the stability of the method and the density of the created map points. Heading-angle thresholds impose further constraints at turns, reducing the effect of sudden viewpoint changes on keyframe selection accuracy. To verify the method, a mobile robot data acquisition platform was independently designed and integrated, and experiments were carried out in indoor scenes and in an underground coal mine, with comparison against the heuristic VSLAM keyframe selection method. The results demonstrate that the proposed method reduces the RMSE by 32% in indoor scenes and 37% underground, and exhibits high positioning accuracy and robustness.
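The reported figures are RMSE values of the absolute trajectory error between estimated and ground-truth positions. A minimal version of the metric and of the relative-improvement figure can be computed as follows (function names are illustrative, not from the thesis):

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root-mean-square of the absolute translational errors between
    time-associated estimated and ground-truth positions (N x 3 arrays)."""
    err = np.linalg.norm(np.asarray(est_xyz) - np.asarray(gt_xyz), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def rmse_improvement(rmse_baseline, rmse_method):
    """Relative RMSE reduction, e.g. 0.32 for a 32% improvement."""
    return 1.0 - rmse_method / rmse_baseline
```

In practice the two trajectories must first be time-associated and aligned (e.g. by a rigid-body fit) before this metric is meaningful; the sketch assumes that step has already been done.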

(3) The sparse point cloud map constructed by VSLAM cannot fully represent underground coal mine scene information such as terrain, obstacles, and equipment layout, and therefore cannot adequately support navigation, path planning, and obstacle avoidance underground. By capturing more environmental detail, a dense point cloud map provides richer scene information and serves as a valuable data source for a range of underground applications. This thesis proposes a fast dense mapping method for underground keyframe images based on the ORB-SLAM3 framework. The ELAS algorithm estimates a visual point cloud for each keyframe, local point clouds are fused using the estimated poses, and global Bundle Adjustment (BA) optimizes the overall point cloud. Experiments in an underground coal mine demonstrate that the constructed point cloud map exhibits good global consistency and geometric fidelity, corroborating the effectiveness of the proposed method.


CLC Number:

 TD676

Open Access Date:

 2028-06-13
