Thesis title (Chinese): |
Research on an Object Tracking Algorithm Based on Improved Correlation Filtering
|
Name: |
Luo Jiaqi
|
Student ID: |
20208088022
|
Confidentiality level: |
Public
|
Thesis language: |
chi (Chinese)
|
Discipline code: |
083500
|
Discipline name: |
Engineering - Software Engineering
|
Student type: |
Master's candidate
|
Degree level: |
Master of Engineering
|
Degree year: |
2023
|
Degree-granting institution: |
Xi'an University of Science and Technology
|
School/Department: |
College of Computer Science and Technology
|
Major: |
Software Engineering
|
Research direction: |
Artificial Intelligence and Information Processing
|
First supervisor: |
She Xiangyang
|
First supervisor's institution: |
Xi'an University of Science and Technology
|
Thesis submission date: |
2023-06-13
|
Thesis defense date: |
2023-06-05
|
Thesis title (English): |
Object Tracking Method Based on Improved Correlation Filter
|
Keywords (Chinese): |
object tracking ; correlation filtering ; sparse representation ; convolutional features ; multi-feature fusion
|
Keywords (English): |
object tracking ; correlation filter ; sparse representation ; convolutional features ; multi-feature fusion
|
Abstract (Chinese): |
︿
With the rapid development of artificial intelligence, object tracking technology is being applied ever more widely in daily life. In recent years, object tracking algorithms based on correlation filtering have drawn particular attention from researchers at home and abroad for their good tracking performance and high computational efficiency. However, the practical application scenarios of object tracking are complex and variable: problems such as target deformation, background clutter, and occlusion frequently arise and degrade tracking accuracy. To solve these problems, this thesis proposes two object tracking algorithms based on improved correlation filtering. The main work is as follows:
To address the problem that correlation-filter-based trackers are easily disturbed by distractors and fail in complex scenes with target deformation and background clutter, a correlation filter tracking algorithm based on sparse representation is proposed. The algorithm: ① combines sparse representation with correlation filtering by introducing an L1-norm penalty into the objective function, so that the trained filter contains only the key features of the target; ② assigns different penalty parameters to the filter coefficients according to their spatial positions, better preserving the effective information in the filter; ③ solves for the filter with the ADMM algorithm to guarantee real-time performance. Experiments show that, in complex scenes with target deformation and background clutter, the algorithm strengthens the filter's robustness to distractors and improves tracking accuracy.
To address the problem that correlation-filter-based trackers easily learn erroneous background information from the image and drift when the target is occluded, a correlation filter tracking algorithm based on multi-feature fusion is proposed. The algorithm: ① simultaneously extracts convolutional, HOG, and CN features of the target to strengthen the feature description; ② adaptively adjusts the feature fusion weights according to the reliability of each feature, so that the features better complement one another; ③ adaptively adjusts the learning rate of the target template according to the APCE value of the response map, preventing the template from learning erroneous target representations under occlusion. Experiments show that, under occlusion, the algorithm effectively reduces tracking drift and improves tracking accuracy.
﹀
|
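The L1-penalized correlation filter described in the abstract can be illustrated with a minimal 1-D sketch. The thesis solves the objective with ADMM and spatially varying penalty parameters; the snippet below instead uses plain ISTA (proximal gradient) with a single penalty weight `lam`, so every function name and parameter here is an illustrative assumption, not the thesis's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrinks values toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def train_sparse_filter(x, y, lam=0.05, iters=300):
    """Fit a 1-D filter w minimising
        0.5 * ||circconv(x, w) - y||^2 + lam * ||w||_1
    via ISTA, with the quadratic term evaluated through the FFT.
    (The thesis uses ADMM and per-coefficient penalties; this is a
    simplified sketch of the same sparse-filter idea.)"""
    n = len(x)
    X, Y = np.fft.fft(x), np.fft.fft(y)
    step = 1.0 / (np.max(np.abs(X)) ** 2 + 1e-8)  # 1 / Lipschitz constant
    w = np.zeros(n)
    for _ in range(iters):
        W = np.fft.fft(w)
        # Gradient of the quadratic data term, computed in the Fourier domain.
        grad = np.real(np.fft.ifft(np.conj(X) * (X * W - Y)))
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

Circularly convolving a new search patch with the learned `w` yields the response map whose peak localizes the target; the L1 term suppresses uninformative filter coefficients so the filter keeps only the key features of the object.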
Abstract (English): |
︿
With the rapid development of artificial intelligence, object tracking has come into increasingly wide use. In recent years, thanks to their excellent tracking accuracy and high efficiency, object tracking methods based on correlation filters have attracted the attention of researchers at home and abroad. However, the practical application scenarios of object tracking are complex and variable: situations such as deformation, background clutter, and occlusion pose great challenges to tracking accuracy. To solve these problems, this thesis proposes two object tracking methods based on improved correlation filters. The main research contents are as follows:
To solve the problem that correlation-filter-based trackers are prone to failure caused by distractive features in complex situations with deformation and background clutter, a correlation filter tracking method based on sparse representation is proposed. The method: ① combines sparse representation with the correlation filter by adding an L1-norm penalty to the objective function, so that the trained filter contains only the key features of the object; ② assigns different penalty parameters to the filter coefficients according to their spatial positions, in order to better retain the effective information in the filter; ③ uses ADMM to solve for the filter, ensuring real-time performance. Experimental results show that, in complex situations with deformation and background clutter, the method enhances the robustness of the correlation filter to distractive features and improves tracking accuracy.
To solve the problem that correlation-filter-based trackers are prone to drift under occlusion, because the filter tends to learn background features from the image, a correlation filter tracking method based on multi-feature fusion is proposed. The method: ① extracts convolutional, HOG, and CN features simultaneously to enhance the description of the object; ② adjusts the feature fusion weights according to the reliability of each feature, so that the features complement one another; ③ adaptively adjusts the learning rate of the object template according to the APCE of the response map, preventing the template from learning incorrect object representations. Experimental results show that the method effectively reduces tracking drift under occlusion and improves tracking accuracy.
﹀
|
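The APCE-based reliability measure and adaptive template update in the second method can be sketched as follows. The APCE formula |F_max − F_min|² / mean((F − F_min)²) is the standard definition from the correlation-filter tracking literature; the fusion-weight and learning-rate rules below are simplified illustrations, and the threshold `ratio` and `base_lr` values are assumed, not taken from the thesis.

```python
import numpy as np

def apce(response):
    """Average Peak-to-Correlation Energy of a response map:
    |F_max - F_min|^2 / mean((F - F_min)^2). High values indicate a
    sharp, unimodal peak; low values suggest clutter or occlusion."""
    fmax, fmin = response.max(), response.min()
    return (fmax - fmin) ** 2 / (np.mean((response - fmin) ** 2) + 1e-12)

def fusion_weights(responses):
    """Weight each feature's response map in proportion to its APCE,
    so that more reliable features dominate the fused response."""
    scores = np.array([apce(r) for r in responses])
    return scores / (scores.sum() + 1e-12)

def template_learning_rate(apce_now, apce_avg, base_lr=0.02, ratio=0.5):
    """Freeze the template update when the current APCE drops well
    below its historical average, a typical sign of occlusion."""
    return 0.0 if apce_now < ratio * apce_avg else base_lr
```

In a tracker loop, the fused response would be the weighted sum of the per-feature response maps, and the template would be updated with the returned learning rate, so that frames with a collapsed response peak contribute nothing to the model.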
CLC number: |
TP391.4
|
Open access date: |
2023-06-14
|