Thesis Title (Chinese): |
Research on an Object Tracking Algorithm Based on the Fusion of Siamese Networks and Correlation Filters
|
Name: |
林歆钰
|
Student ID: |
18207042026
|
Confidentiality Level: |
Public
|
Thesis Language: |
chi
|
Discipline Code: |
081002
|
Discipline Name: |
Engineering - Information and Communication Engineering - Signal and Information Processing
|
Student Type: |
Master's
|
Degree Level: |
Master of Engineering
|
Degree Year: |
2021
|
Degree-Granting Institution: |
西安科技大学
|
School/Department: |
通信与信息工程学院
|
Major: |
Signal and Information Processing
|
Research Direction: |
Image Processing
|
First Supervisor's Name: |
侯颖
|
First Supervisor's Institution: |
西安科技大学
|
Thesis Submission Date: |
2021-06-21
|
Thesis Defense Date: |
2021-06-05
|
Thesis Title (English): |
Research on Correlation Filter and Siamese Network Hybrid Algorithm for Visual Object Tracking
|
Thesis Keywords (Chinese): |
Machine Vision ; Deep Learning ; Target Tracking ; Correlation Filter ; Siamese Network
|
Thesis Keywords (English): |
Machine Vision ; Deep Learning ; Target Tracking ; Correlation Filter ; Siamese Network
|
Thesis Abstract (Chinese): |
︿
Target tracking technology is widely applied in security surveillance, autonomous driving, military guidance and other fields. Current mainstream target tracking algorithms fall into two categories: tracking algorithms based on correlation filters and tracking algorithms based on Siamese networks. Correlation filter trackers have attracted much attention for their high computational speed and good tracking precision, but they tend to fail under scale variation, fast motion and similar conditions; Siamese network trackers obtain better precision and robustness by exploiting deep features with strong representational power, but they tend to fail under severe deformation, complex backgrounds and similar conditions.
By analyzing the advantages and disadvantages of the kernelized correlation filter (KCF) tracker and Siamese network trackers, this thesis proposes a grouped fusion target tracking algorithm based on a Siamese network and a correlation filter. The video sequence is first divided into groups with a fixed number of frames; within each group, the KCF algorithm performs fast target tracking, and in the last frame of each group a Siamese network tracker further corrects the result, improving the poor tracking performance of KCF under scale variation, occlusion and similar problems and avoiding the tracking failure caused by error accumulation (see the sketch after this abstract). In addition, two improvement strategies are adopted to raise tracking performance: (1) a linear Siamese feature template updating strategy is proposed, which lets the SiamFC algorithm retain most of the first-frame target features while introducing information about changes in target appearance, so that it copes more effectively with complex and varying video targets; (2) a multi-resolution segmented preprocessing strategy is proposed, which improves the tracking performance of the KCF algorithm on low-resolution video targets and thus the overall performance of the improved algorithm.
The improved algorithm is compared with state-of-the-art trackers on the OTB-2013 and OTB-100 datasets, and the experimental results show that it effectively improves tracking performance. On OTB-2013, its precision exceeds that of SiamFC and KCF by 4.1% and 7.5%, and its success rate exceeds theirs by 4.3% and 10.0%, respectively. On OTB-100, its precision exceeds that of SiamFC and KCF by 4.4% and 10.1%, and its success rate exceeds theirs by 3.5% and 11.8%, respectively. In particular, for video attributes such as illumination variation, scale variation, fast motion, motion blur, in-plane/out-of-plane rotation and out-of-view, the improved algorithm shows a clear performance advantage over a variety of advanced trackers.
﹀
|
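A minimal sketch of the grouped KCF + SiamFC fusion scheme described in the abstract above, under stated assumptions: OpenCV's contrib KCF tracker stands in for the thesis's KCF component, siamfc_refine is a hypothetical hook for the SiamFC correction step, and the group size of 10 frames is illustrative rather than the value used in the thesis.

import cv2  # requires opencv-contrib-python for cv2.TrackerKCF_create

GROUP_SIZE = 10  # fixed number of frames per group (illustrative value)


def siamfc_refine(frame, bbox):
    """Hypothetical hook: re-localize the target with a SiamFC tracker, return (x, y, w, h)."""
    raise NotImplementedError("plug a SiamFC implementation in here")


def track_grouped(frames, init_bbox):
    """Track init_bbox (x, y, w, h) through a list of BGR frames, group by group."""
    bbox = tuple(int(v) for v in init_bbox)
    results = []
    for start in range(0, len(frames), GROUP_SIZE):
        group = frames[start:start + GROUP_SIZE]
        # KCF performs the fast intra-group tracking, re-initialized from the
        # latest (possibly SiamFC-corrected) bounding box.
        kcf = cv2.TrackerKCF_create()
        kcf.init(group[0], bbox)
        for i, frame in enumerate(group):
            ok, kcf_box = kcf.update(frame)
            if ok:
                bbox = tuple(int(v) for v in kcf_box)
            if i == len(group) - 1:
                # The Siamese correction at the last frame of each group limits
                # the drift KCF accumulates under scale change or occlusion.
                bbox = siamfc_refine(frame, bbox)
            results.append(bbox)
    return results

Re-initializing KCF from the corrected box at each group boundary is what prevents intra-group errors from accumulating over the whole sequence.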
Thesis Abstract (English): |
︿
Target tracking technology is widely used in security surveillance, autonomous driving, military guidance and other fields. Mainstream target tracking algorithms fall into two categories: tracking algorithms based on correlation filters and tracking algorithms based on Siamese networks. Correlation filter trackers have attracted attention for their high computational efficiency and good tracking accuracy, but they are prone to failure under scale variation, fast motion and other challenging scenarios. Siamese network trackers achieve better tracking accuracy and robustness thanks to the strong representational power of deep features, but they are prone to failure under severe deformation, background clutter and other challenging scenarios.
By analyzing the strengths and weaknesses of correlation filter trackers and Siamese network trackers, a grouped hybrid visual object tracking algorithm based on a correlation filter and a Siamese network is proposed. First, the video sequence is divided into groups with a fixed number of frames. Within each group, KCF performs fast target tracking, and in the last frame of each group a Siamese network tracker corrects the result, which mitigates the performance degradation of KCF under scale variation, occlusion and similar situations and avoids the tracking failure caused by error accumulation. Furthermore, two improvement strategies are adopted to improve tracking performance: (1) a linear feature template updating strategy is proposed, which lets the SiamFC algorithm introduce information about changes in target appearance while retaining most of the first-frame target features, so that it handles complex and changing video targets more effectively (see the sketch below); (2) a multi-resolution segmented preprocessing strategy is proposed to improve the tracking performance of the KCF algorithm on low-resolution video.
The proposed algorithm is compared with current state-of-the-art trackers on the OTB-2013 and OTB-100 datasets, and the experimental results show that it effectively improves tracking performance. On the OTB-2013 dataset, the precision of the improved algorithm exceeds that of SiamFC and KCF by 4.1% and 7.5%, and its success rate exceeds theirs by 4.3% and 10.0%, respectively. On the OTB-100 dataset, its precision exceeds that of SiamFC and KCF by 4.4% and 10.1%, and its success rate exceeds theirs by 3.5% and 11.8%, respectively. The improved algorithm performs favorably against state-of-the-art trackers under challenging factors, especially on sequences with illumination variation, scale variation, fast motion, motion blur, in-plane rotation, out-of-plane rotation, and out-of-view.
﹀
|
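The linear Siamese template update mentioned in both abstracts can be written as a single blend; the sketch below is only a plausible form (a fixed weight rho mixing the first-frame embedding with the current one), and the exact weighting and update schedule used in the thesis may differ.

import numpy as np


def update_template(z_first, z_current, rho=0.1):
    """Linearly blend the first-frame exemplar embedding with the current one.

    z_first, z_current: Siamese-branch feature maps of identical shape
    (e.g. arrays of shape (C, H, W)); rho is the weight given to the new
    appearance, so a small rho keeps most of the first-frame features.
    """
    return (1.0 - rho) * np.asarray(z_first) + rho * np.asarray(z_current)

# Usage: after tracking frame t, embed the current target patch and refresh the
# matching template used for the next cross-correlation search, e.g.
# template = update_template(z0, z_t, rho=0.1)

The multi-resolution preprocessing strategy could be sketched in the same spirit by upscaling low-resolution search regions before they are handed to KCF.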
CLC Number: |
TP391.41
|
Open Access Date: |
2021-06-21
|