Thesis Information

Thesis title (Chinese): 基于视觉伺服的煤矸分拣机器人动态跟踪与控制方法研究
Author: 王虎生
Student ID: 21205016015
Confidentiality level: Public
Thesis language: Chinese
Discipline code: 080202
Discipline: Engineering - Mechanical Engineering - Mechatronic Engineering
Student type: Master's
Degree: Master of Engineering
Degree year: 2024
Degree-granting institution: 西安科技大学 (Xi'an University of Science and Technology)
School: 机械工程学院 (School of Mechanical Engineering)
Major: Mechanical Engineering
Research direction: Robotics
First supervisor: 曹现刚
First supervisor's institution: 西安科技大学 (Xi'an University of Science and Technology)
Submission date: 2024-06-17
Defense date: 2024-06-03
Thesis title (English): Research on Dynamic Tracking and Control Method of Coal Gangue Sorting Robot Based on Visual Servoing
Keywords (Chinese): 煤矸分拣机器人; 视觉伺服; 图像匹配; 目标跟踪; 双环模糊PID控制; 轨迹规划
Keywords (English): Coal Gangue Sorting Robot; Visual Servoing; Image Matching; Target Tracking; Dual-Loop Fuzzy PID Control; Trajectory Planning

Abstract (Chinese):

机器人分拣煤矸石是煤炭分选智能化的关键技术。对煤矸石进行精确定位与同步跟踪是提高煤矸石分拣机器人分拣效率的前提。现有技术主要依靠带速对煤矸石位姿进行计算,未考虑煤矸石随皮带运输过程中存在打滑、跑偏等问题,导致机械臂抓取时出现较大误差,影响机器人分拣效率。针对该问题,提出了一种基于视觉伺服的煤矸分拣机器人动态跟踪与控制方法。主要内容包含以下部分:

基于ORB(Oriented FAST and Rotated BRIEF)局部特征点匹配算法框架提出一种融合邻域特征点信息与网格运动统计的煤矸石快速匹配算法。在特征检测与描述阶段,通过比较特征点邻域的平均灰度值差异构建改进后的特征描述符,改进后的描述符可包含更多信息并实现特征点描述符的并行计算,提升了检测精度与效率。在特征匹配阶段,利用模板图像与分拣图像之间的特征对应关系,构建网格运动统计算法对特征点进行快速匹配筛选,将筛选后的特征点通过随机抽样一致性算法进行进一步误匹配点去除,以减少错误匹配,提升算法鲁棒性。最后对改进算法与传统ORB算法在尺度、光照、视角变化下进行实验对比,本文改进算法匹配准确率分别提升了16.7%、36%和22%,平均匹配耗时在40ms以内,平均匹配定位误差为1.29mm,能较好满足煤矸石快速匹配需求。

在匹配结果的基础上,提出了一种改进SiamRPN++(Evolution of Siamese Visual Tracking with Very Deep Networks)的煤矸石精准跟踪算法。采用ResNet50深层特征作为特征提取网络,并在网络中引入SimAM注意力机制,实现目标与背景的准确区分,提升模型定位精度。同时,提出煤矸石背景特征感知模块,根据跟踪置信度判断煤矸石遮挡情况,同时保存未遮挡时的目标背景邻域特征,在遮挡发生时,根据邻域特征对遮挡时的位置进行修正,提升模型的抗遮挡能力。在0.6~1.1m/s带速下,对改进算法与原算法进行了跟踪精度对比实验,本文算法相较原算法跟踪精度提升了4.13%,视觉跟踪成功率平均为96.9%,跟踪速度平均为39FPS,能较好满足煤矸石精准跟踪需求。

在视觉定位的基础上,提出一种基于双环模糊PID的视觉伺服控制算法。通过结合模糊控制策略与PID控制策略,完成动态抓取过程中的PID参数自整定,提升系统在动态干扰环境中的跟踪同步能力。同时,本文通过位置环、速度环双环控制,实现抓取时机械臂末端与待抓取矸石的位置、速度同步,避免抓取时由于速度不同步导致的冲击问题。对所提视觉伺服控制算法进行了跟踪精度分析,在不同打滑、跑偏工况下平均位置误差均在1mm以内,平均速度误差在1mm/s以内,跟踪位置、速度误差较小,能够满足机械臂稳准抓取煤矸石需求。

Abstract (English):

Robotic gangue sorting is a key technology for intelligent coal separation. Accurate positioning and synchronous tracking of gangue are prerequisites for improving the efficiency of a gangue sorting robot. Existing methods rely mainly on belt speed to calculate the gangue pose and do not account for slippage and deviation of the gangue during belt transport, which causes large errors when the manipulator grasps the target and degrades sorting efficiency. To address this problem, a dynamic tracking and control method for gangue sorting robots based on visual servoing is proposed. The main work comprises the following parts:

Within the ORB (Oriented FAST and Rotated BRIEF) local feature matching framework, a fast gangue matching algorithm is proposed that fuses neighborhood feature information with grid motion statistics. In the feature detection and description stage, an improved descriptor is constructed by comparing average gray-value differences over feature-point neighborhoods; the improved descriptor encodes more information and allows descriptors to be computed in parallel, improving detection accuracy and efficiency. In the matching stage, the feature correspondences between the template image and the sorting image are used to build a grid motion statistics filter that rapidly screens candidate matches, after which remaining mismatches are removed with the random sample consensus (RANSAC) algorithm, further reducing false matches and improving robustness. Finally, the improved algorithm is compared with the traditional ORB algorithm under scale, illumination, and viewpoint changes: matching accuracy improves by 16.7%, 36%, and 22%, respectively, the average matching time is within 40 ms, and the average matching localization error is 1.29 mm, satisfying the requirements of fast gangue matching.
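The grid-screening idea can be illustrated with a compact sketch. The function below is a simplified, illustrative stand-in (not the thesis implementation; full grid motion statistics uses overlapping grids and handles rotation and scale): a putative match is kept only when enough other matches send the same grid cell in the template image to the same grid cell in the sorting image, since correct matches cluster while random mismatches scatter.

```python
import numpy as np

def gms_filter(pts1, pts2, shape1, shape2, grid=4, min_support=3):
    """Keep match i only if its (cell-in-image-1, cell-in-image-2) pair is
    supported by at least `min_support` matches in total."""
    h1, w1 = shape1
    h2, w2 = shape2
    # Index of the grid cell containing each matched point, per image.
    cell1 = (pts1[:, 1] * grid // h1) * grid + pts1[:, 0] * grid // w1
    cell2 = (pts2[:, 1] * grid // h2) * grid + pts2[:, 0] * grid // w2
    pair = cell1.astype(int) * grid * grid + cell2.astype(int)
    support = np.bincount(pair)      # number of matches per cell pair
    return support[pair] >= min_support
```

In the full pipeline described above, RANSAC (for example a homography fit) then removes the few mismatches that survive the grid screening, which is cheap once most outliers are gone.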

Building on the matching results, an improved SiamRPN++ (Evolution of Siamese Visual Tracking with Very Deep Networks) algorithm for precise gangue tracking is proposed. ResNet50 serves as the feature extraction backbone, and the SimAM attention mechanism is introduced into the network to distinguish the target from the background accurately and improve the model's localization precision. A gangue background feature perception module is also proposed: it judges gangue occlusion from the tracking confidence and stores the background neighborhood features of the unoccluded target; when occlusion occurs, the predicted position is corrected from those neighborhood features, improving the model's robustness to occlusion. At belt speeds of 0.6–1.1 m/s, the tracking accuracy of the improved and original algorithms was compared: the proposed algorithm improves tracking accuracy by 4.13%, with an average visual tracking success rate of 96.9% and an average tracking speed of 39 FPS, satisfying the requirements of precise gangue tracking.
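SimAM is parameter-free: each activation is gated by an energy term measuring how much it stands out from the other neurons in its channel, so salient gangue pixels are emphasized over belt background. A minimal NumPy sketch of the mechanism (the thesis applies it inside the ResNet50 backbone; here a bare (C, H, W) tensor suffices):

```python
import numpy as np

def simam(x, lam=1e-4):
    """SimAM attention over a (C, H, W) feature map: activations far
    from their channel mean get a gate closer to 1."""
    _, h, w = x.shape
    n = h * w - 1
    d = (x - x.mean(axis=(1, 2), keepdims=True)) ** 2   # (x - mu)^2
    v = d.sum(axis=(1, 2), keepdims=True) / n           # channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5                 # inverse energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))           # sigmoid gating
```

Because the weighting is computed from the feature map itself, the module adds no learnable parameters to the backbone.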

Building on the visual localization results, a visual servo control algorithm based on a dual-loop fuzzy PID controller is proposed. Combining a fuzzy control strategy with PID control enables self-tuning of the PID parameters during dynamic grasping and improves the system's tracking synchronization under dynamic disturbances. Through dual-loop control with a position loop and a velocity loop, the manipulator end-effector is synchronized with the target gangue in both position and velocity at the moment of grasping, avoiding the impact caused by velocity mismatch. Tracking accuracy analysis of the proposed visual servo controller shows that, under different slippage and deviation conditions, the average position error stays within 1 mm and the average velocity error within 1 mm/s; these errors are small enough for the manipulator to grasp gangue stably and accurately.
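The cascade structure can be sketched with a scalar toy simulation. Everything below is invented for illustration (the gains, the single-axis model, and the crude `fuzzy_scale` surrogate for the thesis's fuzzy rule base): the outer position loop adds a correction on top of the belt-speed feedforward to form a velocity setpoint, the inner velocity loop turns that setpoint into an acceleration command, and the fuzzy scaling raises Kp and lowers Kd as the position error grows.

```python
class PID:
    """Textbook PID with backward-difference derivative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev is None else (err - self.prev) / dt
        self.prev = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def fuzzy_scale(err):
    """Stand-in for the fuzzy rule base: large errors raise Kp (react
    faster) and lower Kd (less damping); the scaling fades near zero."""
    e = min(abs(err), 1.0)
    return 1.0 + 0.5 * e, 1.0 - 0.3 * e

def track(belt_v=1.0, dt=0.01, steps=2000):
    pos = PID(4.0, 0.0, 0.4)   # outer loop: position error -> velocity setpoint
    vel = PID(6.0, 2.0, 0.0)   # inner loop: velocity error -> acceleration
    arm_p, arm_v, target = 0.0, 0.0, 0.5   # gangue starts 0.5 m ahead
    for _ in range(steps):
        target += belt_v * dt                  # gangue carried by the belt
        e_p = target - arm_p
        s_kp, s_kd = fuzzy_scale(e_p)          # fuzzy self-tuning of outer gains
        pos.kp, pos.kd = 4.0 * s_kp, 0.4 * s_kd
        v_ref = belt_v + pos.step(e_p, dt)     # feedforward + position correction
        arm_v += vel.step(v_ref - arm_v, dt) * dt
        arm_p += arm_v * dt
    return target - arm_p, belt_v - arm_v      # final position / velocity errors
```

In this toy setup both errors settle to near zero, i.e. the end-effector ends up moving with the gangue in position and velocity, which is the synchrony the dual-loop scheme targets at the grasping instant.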


CLC number: TP242.2
Open access date: 2024-06-18
