Thesis Title (Chinese): | Research on High-Precision DSM and TDOM Generation Based on UAV Imagery |
Name: | |
Student ID: | 21210061038 |
Confidentiality Level: | Confidential (open after 1 year) |
Thesis Language: | chi |
Discipline Code: | 081602 |
Discipline: | Engineering - Surveying and Mapping Science and Technology - Photogrammetry and Remote Sensing |
Student Type: | Master's |
Degree: | Master of Engineering |
Degree Year: | 2024 |
Degree-Granting Institution: | Xi'an University of Science and Technology |
Department: | |
Major: | |
Research Direction: | 3D Reconstruction |
First Supervisor: | |
First Supervisor's Institution: | |
Thesis Submission Date: | 2024-06-14 |
Thesis Defense Date: | 2024-06-02 |
Thesis Title (English): | Research on High-Precision DSM and TDOM Generation Based on UAV Imagery |
Keywords (Chinese): | |
Keywords (English): | UAV Images; Dense Matching; Image Segmentation; Digital Surface Model; True Digital Ortho Map |
Abstract (Chinese): |
The Digital Surface Model (DSM) and True Digital Ortho Map (TDOM), as essential components of geospatial data, carry abundant surface information and are indispensable for geographic information systems, urban planning, environmental monitoring, and other fields. Existing methods typically apply dense matching algorithms to aerial remote sensing images to generate a disparity or depth map, perform elevation fusion on it to produce the DSM, and then generate the TDOM through steps such as occlusion detection and digital differential rectification.

UAV imagery offers convenient acquisition, low cost, and rich information, and has become an important data source for photogrammetric spatial information acquisition. However, unfavorable factors such as a small base-to-height ratio and unstable flight attitude make UAV image quality far inferior to that of conventional aerial remote sensing imagery, which lowers the quality of photogrammetric data products such as DSM and TDOM. This thesis therefore studies the data accuracy and efficiency of each key step in producing high-precision DSM and TDOM data from UAV imagery, seeking to improve and optimize existing methods and thereby advance high-precision DSM and TDOM data production.

To achieve these objectives, the thesis draws on photogrammetric computer vision methods to study multi-view image dense matching, automatic generation of high-precision depth maps, and occluded-area detection in the DSM and TDOM production process. The specific research contents are as follows:

In DSM production, a dense matching algorithm for UAV images based on image segmentation is proposed to address the accuracy loss caused by mismatches of traditional dense matching methods in weakly textured, large-height-difference, and occluded regions. Combining clustering segmentation with multi-scale image depth estimation, the algorithm segments the images into superpixels, fits an object-space plane to each superpixel with RANSAC, and propagates the depth and normal vector estimates of superpixel edge pixels in weakly textured regions by PatchMatch plane propagation, improving the accuracy of the depth map. Qualitative and quantitative experiments on standard MVS datasets show that, compared with COLMAP, the proposed algorithm achieves higher precision and accuracy in depth estimation, reducing the RMSE of depth estimation from 0.74 to 0.38, an accuracy improvement of 48.65%.

To address the salt-and-pepper-like noise in depth maps produced by inaccurate dense matching, a depth map optimization method based on adaptive joint bilateral filtering is proposed, which applies four-directional depth value completion and adaptive joint bilateral filtering to improve the accuracy and completeness of the depth map. Experiments on UAV image datasets show that the optimized depth maps effectively fill abnormal depth pixels and highlight the edge structure of buildings, and the proposed method improves efficiency by 47.38% compared with COLMAP.

In TDOM generation, to overcome the low efficiency of traditional occlusion detection algorithms, a CUDA-based multi-threaded parallel occluded-area detection method is proposed, including steps for partitioning the image into regions and computing incremental arrays; it improves efficiency by 94.87% over the CPU implementation. In addition, to handle the inconsistent color tones and missing textures that arise when stitching multiple TDOMs, optimization operations such as block-wise generation, dodging and color balancing, and texture completion are applied to produce high-precision TDOMs. Qualitative and quantitative comparisons on multiple UAV image datasets and standard aerial photography datasets show that the proposed algorithms achieve higher accuracy and efficiency in DSM and TDOM generation than advanced foreign commercial software (SURE, Pix4D, PhotoScan), especially for building structures in weakly textured regions. |
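A minimal sketch of the superpixel plane-fitting step described in the abstract is given below, assuming NumPy, a pinhole camera model, and illustrative parameter values (iteration count, inlier threshold); the function names are hypothetical and this is not the thesis implementation.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.05, rng=None):
    """Fit a plane n.x + d = 0 to the 3D points of one superpixel with RANSAC.

    points : (N, 3) object-space points back-projected from a coarse depth map.
    Returns (normal, d, inlier_mask).
    """
    rng = np.random.default_rng() if rng is None else rng
    best_plane, best_inliers = None, None
    for _ in range(iters):
        # Sample three points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Score the candidate by counting points within the distance threshold.
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (n, d), inliers
    if best_plane is None:
        raise ValueError("no non-degenerate plane hypothesis found")
    return best_plane[0], best_plane[1], best_inliers

def plane_induced_depth(n, d, K, u, v):
    """Depth hypothesis at pixel (u, v) induced by plane (n, d) under intrinsics K.

    The viewing ray is X = depth * K^(-1) [u, v, 1]^T; intersecting it with
    n.X + d = 0 gives depth = -d / (n . ray), with n as the normal hypothesis.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return -d / (n @ ray), n
```

In a PatchMatch-style scheme such as the one the abstract describes, the depth/normal pair returned by `plane_induced_depth` would act as a plane hypothesis propagated from superpixel edge pixels into neighboring weakly textured pixels and accepted or rejected by photometric consistency scoring.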
Abstract (English): |
The Digital Surface Model (DSM) and True Digital Ortho Map (TDOM), as two essential components of geospatial data, carry abundant surface information and are indispensable for geographic information systems, urban planning, environmental monitoring, and other fields. Existing methods typically apply dense matching algorithms to aerial remote sensing images to generate a disparity or depth map, perform elevation fusion to produce the DSM, and then generate the TDOM through steps such as occlusion detection and digital differential rectification.

With the advantages of convenient acquisition, low cost, and rich information, UAV imagery has become an important data source for photogrammetric spatial information acquisition. However, unfavorable factors such as a small base-to-height ratio and unstable flight attitude make UAV image quality far inferior to that of conventional airborne remote sensing images, which lowers the quality of photogrammetric data products such as DSM and TDOM. This thesis therefore focuses on the data accuracy and efficiency of each key step in the production of high-precision DSM and TDOM data from UAV imagery, seeking to improve and optimize existing methods and thereby promote the production of high-precision, large-scene DSM and TDOM data.

To fulfill these research objectives, the thesis builds on photogrammetric computer vision methods and conducts an in-depth study of multi-view image dense matching, automatic generation of high-precision depth maps, and occluded-area detection in the DSM and TDOM data production process. The specific research contents are as follows:

In the process of DSM generation, a dense matching algorithm for UAV images based on image segmentation is proposed to address the accuracy degradation caused by mismatches of traditional dense matching methods in texture-less, large-height-difference, and occluded areas. Combining clustering segmentation with multi-scale image depth estimation, the algorithm segments the images into superpixels, fits object-space planes to the superpixels using RANSAC, and then performs PatchMatch plane propagation of the depth and normal vector estimates of superpixel edge pixels in texture-less regions, which effectively improves the accuracy of the depth map. Qualitative and quantitative experiments on standard MVS datasets demonstrate that the proposed method achieves higher accuracy and precision in depth estimation than COLMAP, reducing the RMSE of depth estimation from 0.74 to 0.38, an accuracy improvement of 48.65%.

To address the salt-and-pepper-like noise in depth maps caused by inaccurate dense matching results, a depth map optimization method based on an adaptive joint bilateral filter is proposed, which employs four-directional depth value completion and adaptive joint bilateral filtering to further improve the accuracy and completeness of the depth map. Experimental results on UAV image datasets demonstrate that the optimized depth maps effectively fill abnormal depth pixels and highlight the edge structure of buildings, and the proposed method improves efficiency by 47.38% compared with COLMAP.

In the process of TDOM generation, a CUDA-based multi-threaded parallel occluded-area detection method is proposed to address the inefficiency of traditional occlusion detection algorithms. By partitioning the image into regions and computing incremental arrays, efficiency is improved by 94.87% compared with the CPU implementation. In addition, to address inconsistent color tones and texture loss when stitching multiple TDOMs, several optimization operations are applied, including block-wise generation, dodging and color balancing, and texture completion, to produce high-precision TDOMs. Qualitative and quantitative comparisons on multiple UAV image datasets and standard aerial photography datasets demonstrate that the proposed methods achieve higher accuracy and efficiency in DSM and TDOM generation than advanced foreign commercial software (SURE, Pix4D, and PhotoScan), especially for building structures in texture-less areas. |
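The occlusion test that the CUDA-parallel detector accelerates can be illustrated, under assumptions, as a Z-buffer style visibility check: project every DSM cell into the image, keep the nearest surface point per image pixel, and mark any clearly farther cell at the same pixel as occluded. The sketch below is plain NumPy and only shows that per-cell logic; the function name, data layout, and depth tolerance are assumptions, and the thesis's actual method instead partitions the image into regions and computes incremental arrays across CUDA threads.

```python
import numpy as np

def zbuffer_occlusion_mask(dsm_points, K, R, t, img_shape, tol=0.5):
    """Flag DSM cells that are hidden from a given aerial image.

    dsm_points : (H, W, 3) object-space coordinates (X, Y, Z) of the DSM grid
    K, R, t    : camera intrinsics and exterior orientation of the image
    img_shape  : (rows, cols) of the image used as the Z-buffer resolution
    tol        : depth tolerance separating visible surface points from occluded ones
    Returns a boolean (H, W) mask, True where the DSM cell is occluded.
    """
    H, W, _ = dsm_points.shape
    pts = dsm_points.reshape(-1, 3)
    cam = (R @ pts.T + t.reshape(3, 1)).T            # points in the camera frame
    depth = cam[:, 2]
    uvw = (K @ cam.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)  # image column of each cell
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)  # image row of each cell
    inside = (depth > 0) & (u >= 0) & (u < img_shape[1]) & (v >= 0) & (v < img_shape[0])

    # Pass 1: the Z-buffer keeps the nearest depth observed at each image pixel.
    zbuf = np.full(img_shape, np.inf)
    for i in np.flatnonzero(inside):
        if depth[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = depth[i]

    # Pass 2: a cell is occluded if something clearly nearer wins its pixel.
    occluded = np.ones(len(pts), dtype=bool)         # outside the footprint counts as hidden
    for i in np.flatnonzero(inside):
        occluded[i] = depth[i] > zbuf[v[i], u[i]] + tol
    return occluded.reshape(H, W)
```

Both passes treat each DSM cell independently, which is what makes this kind of visibility test amenable to the per-thread GPU parallelization described in the abstract.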
CLC Number: | P231 |
Open Access Date: | 2025-06-14 |