论文中文题名: |
锚固孔视觉定位与钻锚机器人自主控制方法研究
|
姓名: |
雷孟宇
|
学号: |
19105016001
|
保密级别: |
保密(1年后开放)
|
论文语种: |
chi
|
学科代码: |
080202
|
学科名称: |
工学 - 机械 - 机械电子工程
|
学生类型: |
博士
|
学位级别: |
工学博士
|
学位年度: |
2024
|
培养单位: |
西安科技大学
|
院系: |
机械工程学院
|
专业: |
机械工程
|
研究方向: |
智能检测与控制
|
第一导师姓名: |
张旭辉
|
第一导师单位: |
西安科技大学
|
论文提交日期: |
2024-06-14
|
论文答辩日期: |
2024-05-31
|
论文外文题名: |
Visual positioning of drilling holes and autonomous control method of anchor drilling robot
|
论文中文关键词: |
钻锚机器人 ; 锚杆支护 ; 锚固孔视觉识别 ; 逆运动学 ; 分步视觉伺服控制 ; 视觉定位
|
论文外文关键词: |
Anchor drilling robot ; Roof support ; Identification of drilling holes ; Inverse kinematics ; Step-by-step visual servo control ; Visual positioning
|
论文中文摘要: |
︿
<p> 煤矿智能化建设对于煤矿安全、高效、绿色生产具有重要意义。煤矿巷道顶板支护工艺流程复杂,并且钻锚装备自动化智能化程度低,导致煤矿巷道支护速度慢,跟不上巷道掘进的速度,巷道“掘支失衡”问题长期存在;提高钻锚装备自动化智能化水平是提高巷道支护效率、破解“掘支失衡”难题的关键。因此,本文提出了锚固孔视觉定位与钻锚机器人自主控制方法,利用视觉检测方法实现锚固孔快速识别与精确定位,构建机械臂视觉伺服控制模型,控制机械臂运动实现末端执行器快速精确对准目标锚固孔中心。通过提高钻锚装备自动化智能化程度,实现了提高巷道顶板支护速度的目标。</p>
<p> 针对人工识别定位锚固孔存在的位置偏差大、劳动强度大和安全性差等问题,提出基于参数自适应调整Hough变换(Hough Transform)的锚固孔自动识别检测方法。通过构建成像平面上锚固孔尺寸预测模型,基于钻锚机器人运动学模型自适应调整Hough变换参数,实现锚固孔快速识别检测;利用锚固孔物理信息构建几何约束,筛除无效锚固孔识别检测结果,提高锚固孔识别检测精度;基于轮廓形状和几何约束完成图像立体匹配,构建双目视觉定位模型完成锚固孔空间位置解算,实现煤矿巷道锚固孔快速识别与精确定位,为后续实现钻锚机器人自主控制奠定基础。</p>
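<p>以下给出上述参数自适应调整Hough变换检测思路的一个简化示意(假设使用OpenCV的HoughCircles接口;predict_radius_px为按针孔成像关系给出的示意函数,真实的成像尺寸预测模型由钻锚机器人运动学与相机标定参数确定,示例中的孔径、距离等数值仅为演示取值,并非论文实验配置):</p>
<pre>
# 简化示意:根据预测孔径自适应设定Hough变换参数并筛除无效结果
import cv2
import numpy as np

def predict_radius_px(hole_diameter_m, distance_m, focal_px):
    """按针孔成像模型估算锚固孔在图像平面上的半径(像素)。"""
    return 0.5 * hole_diameter_m * focal_px / distance_m

def detect_holes(gray, r_pred, tol=0.3):
    """在预测半径附近自适应设定参数,并用半径范围约束筛除无效检测结果。"""
    r_min, r_max = int(r_pred * (1 - tol)), int(r_pred * (1 + tol))
    circles = cv2.HoughCircles(
        cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT, dp=1.2,
        minDist=2 * r_min,            # 相邻锚固孔的最小间距约束
        param1=120, param2=30,        # Canny高阈值 / 累加器阈值
        minRadius=r_min, maxRadius=r_max)
    return [] if circles is None else circles[0]

# 用一幅合成图像演示:在(320, 240)处绘制一个半径40像素的"孔"
gray = np.zeros((480, 640), np.uint8)
cv2.circle(gray, (320, 240), 40, 255, -1)
r_pred = predict_radius_px(hole_diameter_m=0.032, distance_m=0.6, focal_px=1500)
for x, y, r in detect_holes(gray, r_pred):
    print(f"hole center=({x:.0f}, {y:.0f}), r={r:.0f}px")
</pre>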
<p>针对不符合Pieper准则的机械臂逆运动学求解存在的收敛难和精度差等问题,提出基于最优初始值与迭代步长因子的机械臂逆运动学求解方法。首先构建描述机械臂末端执行器空间位姿的误差函数,基于蒙特卡洛随机采样方法分析机械臂工作空间,利用空间点与工作空间映射关系确定误差函数最小值对应的空间点,将其对应的关节角度和位置作为迭代方法求机械臂逆解的最优初始值,解决了迭代方法随机初始值引起的难以收敛问题;针对迭代过程固定步长导致的收敛时间长或振荡问题,引入迭代步长因子,构建实时动态自适应调整策略,解决了迭代方法容易陷入局部最优的难题,提高了钻锚机器人逆运动学求解精度和速度。</p>
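<p>以下给出引入迭代步长因子的数值逆解更新过程的一个简化示意(fk_err(q)返回末端位姿误差向量及对应雅可比矩阵,由具体机械臂的运动学模型给出,示例中以平面二连杆机械臂代替;q0对应由工作空间采样得到的最优初始值,此处取演示数值):</p>
<pre>
# 简化示意:带步长因子自适应调整的迭代逆解
import numpy as np

def ik_iterate(fk_err, q0, tol=1e-6, max_iter=200):
    """误差下降则放大步长加快收敛,误差上升则缩小步长抑制振荡。"""
    q, alpha = np.asarray(q0, float), 1.0
    e, J = fk_err(q)
    for _ in range(max_iter):
        if np.linalg.norm(e) < tol:
            break
        dq = np.linalg.pinv(J) @ e              # 伪逆求关节角增量
        e_new, J_new = fk_err(q + alpha * dq)
        if np.linalg.norm(e_new) < np.linalg.norm(e):
            q, e, J = q + alpha * dq, e_new, J_new
            alpha = min(1.0, alpha * 1.2)       # 误差下降:放大步长
        else:
            alpha *= 0.5                        # 误差上升:缩小步长
    return q

# 以连杆长均为1 m的平面2R机械臂为例,目标点取(1.2, 0.8)
target = np.array([1.2, 0.8])
def fk_err(q):
    x = np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                  np.sin(q[0]) + np.sin(q[0] + q[1])])
    J = np.array([[-x[1], -np.sin(q[0] + q[1])],
                  [ x[0],  np.cos(q[0] + q[1])]])
    return target - x, J

print(np.round(ik_iterate(fk_err, q0=[0.5, 0.5]), 4))
</pre>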
<p> 针对人工控制钻锚装备完成支护任务存在的精度低、安全性差等难题,提出基于分步视觉伺服的钻锚机器人运动控制方法。在锚固孔快速识别与精确定位的基础上,利用五次多项式插值方法进行机械臂轨迹规划,构建基于位置的视觉控制模型,控制机械臂按照规划轨迹运动,实现末端执行器接近目标锚固孔位置,完成机械臂粗控制;当末端执行器与目标位置距离小于设定阈值时,以目标锚固孔外接矩形顶点坐标为图像特征,构建基于图像的视觉伺服控制模型,利用粒子群优化算法进行图像雅可比矩阵估计,设计系统的运动控制律,实现末端执行器快速精确对准目标锚固孔中心,解决了人工控制钻锚装备导致的控制精度低和安全性差的难题。</p>
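<p>机械臂轨迹规划所用的五次多项式插值,可按起止位置给定、起止速度与加速度为零的边界条件求取系数,以下为单关节的简化示意(起止角度、运动时间均为演示取值):</p>
<pre>
# 简化示意:五次多项式插值系数求解与轨迹采样
import numpy as np

def quintic_coeffs(q0, qf, T):
    """返回满足 q(0)=q0、q(T)=qf 且端点速度、加速度为零的系数 a0..a5。"""
    A = np.array([
        [1, 0,    0,       0,        0,        0],
        [0, 1,    0,       0,        0,        0],
        [0, 0,    2,       0,        0,        0],
        [1, T,  T**2,    T**3,     T**4,     T**5],
        [0, 1,  2*T,   3*T**2,   4*T**3,   5*T**4],
        [0, 0,    2,     6*T,   12*T**2,  20*T**3]])
    b = np.array([q0, 0, 0, qf, 0, 0])
    return np.linalg.solve(A, b)

a = quintic_coeffs(q0=0.0, qf=np.pi / 3, T=2.0)
t = np.linspace(0.0, 2.0, 5)
q = sum(a[i] * t**i for i in range(6))   # 按规划轨迹采样关节角
print(np.round(q, 4))
</pre>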
<p> 为验证锚固孔视觉定位与钻锚机器人自主控制方法的有效性和精确性,搭建钻锚机器人实验平台,进行锚固孔视觉识别检测与定位实验和钻锚机器人视觉伺服控制实验。实验结果表明,本文所述锚固孔视觉识别检测与定位方法精度高、实时性好,能够实现锚固孔快速识别与精确定位;分步视觉伺服控制方法能够控制机械臂运动,实现末端执行器快速精确对准目标锚固孔中心。本文提出的锚固孔视觉定位与钻锚机器人自主控制方法有效提高了煤矿钻锚装备自动化程度,提升了巷道顶板支护效率,有效缓解了“掘支失衡”难题,为综掘工作面智能化建设提供一种新思路。</p>
﹀
|
论文外文摘要: |
︿
<p> The development of intelligent systems in coal mines is critically important for ensuring safe, efficient, and environmentally sustainable production. The process of supporting the roofs of mine tunnels is complex, and the automation and intelligence levels of anchor drilling equipment are low. This results in slow support installation that cannot keep pace with tunnel excavation, leading to a persistent imbalance between excavation and support. Enhancing the automation and intelligence of anchor drilling equipment is key to improving support efficiency and addressing this challenge. Therefore, this study proposes a method for visual positioning of drilling holes and autonomous control of anchor drilling robots. By utilizing visual detection methods, the rapid identification and precise positioning of drilling holes can be achieved. A visual servo control model for the manipulator is constructed to control its movement, enabling the end effector to quickly and accurately align with the center of the target drilling hole. By enhancing the automation and intelligence of anchor drilling equipment, the goal of accelerating the support process for mine tunnel roofs is realized.</p>
<p>To address the issues of large positional deviations, high labor intensity, and poor safety associated with manual identification and positioning of drilling holes, an automatic detection method for drilling holes based on adaptive parameter adjustment of the Hough Transform is proposed. By constructing a prediction model for drilling hole dimensions on the imaging plane and adaptively adjusting the Hough Transform parameters based on the kinematic model of the anchor drilling robot, rapid identification and detection of drilling holes are achieved. Geometric constraints are constructed using the physical information of the drilling holes to filter out invalid detection results, thereby improving detection accuracy. Contour shape and geometric constraints are used for image stereo matching, and a binocular vision positioning model is constructed to calculate the spatial position of the drilling holes. This enables rapid identification and precise positioning of drilling holes in coal mine tunnels, laying the foundation for the subsequent autonomous control of the anchor drilling robot.</p>
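<p>A minimal sketch of the binocular positioning step is given below. It assumes rectified stereo images, so the depth of the matched hole centre follows Z = f·B/d from the horizontal disparity d; the focal length, baseline, and pixel coordinates are illustrative values rather than the calibration of the experimental platform:</p>
<pre>
# Minimal sketch: triangulating a matched hole centre from rectified stereo views
import numpy as np

def triangulate(uv_left, uv_right, f, cx, cy, baseline):
    """Recover the hole centre (X, Y, Z) in the left-camera frame (metres)."""
    d = uv_left[0] - uv_right[0]            # horizontal disparity in pixels
    Z = f * baseline / d
    X = (uv_left[0] - cx) * Z / f
    Y = (uv_left[1] - cy) * Z / f
    return np.array([X, Y, Z])

p = triangulate(uv_left=(812.0, 390.0), uv_right=(768.0, 390.0),
                f=1200.0, cx=640.0, cy=360.0, baseline=0.12)
print(np.round(p, 3))
</pre>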
<p>To address the issues of convergence difficulty and low accuracy in solving the inverse kinematics of manipulators that do not conform to Pieper's criterion, a method for solving the inverse kinematics of manipulators based on optimal initial values and a dynamic iterative step coefficient is proposed. First, an error function describing the spatial pose of the manipulator's end effector is constructed. The Monte Carlo random sampling method is used to analyze the manipulator's workspace, and the mapping relationship between spatial points and the workspace is utilized to determine the spatial point corresponding to the minimum error function value. The joint angles and positions corresponding to this point are taken as the optimal initial values for the iterative method to solve the inverse kinematics, addressing the convergence issues caused by random initial values in iterative methods. To tackle the problems of long convergence time or oscillation caused by a fixed step size in the iterative process, an iterative step coefficient is introduced to construct a real-time dynamic adaptive adjustment strategy. This approach mitigates the risk of the iterative method getting trapped in local optima, thereby enhancing the accuracy and speed of solving the inverse kinematics for the manipulator of the anchor drilling robot.</p>
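<p>A minimal sketch of selecting the optimal initial value by Monte Carlo workspace sampling is shown below. fk_position(q) and joint_limits stand in for the robot-specific forward kinematics and joint ranges (a planar two-link arm is used here for illustration), and the error function only scores position error for brevity:</p>
<pre>
# Minimal sketch: Monte Carlo workspace sampling to pick the initial value
import numpy as np

def best_initial_value(fk_position, joint_limits, target, n_samples=20000, seed=0):
    """Return the sampled joint vector whose end-effector point is closest to target."""
    rng = np.random.default_rng(seed)
    low = np.array([lo for lo, _ in joint_limits])
    high = np.array([hi for _, hi in joint_limits])
    samples = rng.uniform(low, high, size=(n_samples, len(joint_limits)))
    errors = np.array([np.linalg.norm(fk_position(q) - target) for q in samples])
    return samples[np.argmin(errors)]       # handed to the iterative solver as q0

# Illustration with a planar two-link arm (both links 1 m long)
fk = lambda q: np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                         np.sin(q[0]) + np.sin(q[0] + q[1])])
q0 = best_initial_value(fk, [(-np.pi, np.pi), (-np.pi, np.pi)],
                        target=np.array([1.2, 0.8]), n_samples=5000)
print(np.round(q0, 3))
</pre>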
<p>Manual control of anchor drilling equipment for support tasks faces challenges such as low precision and poor safety. To address these issues, a stepwise visual servo control method for the manipulator of an anchor drilling robot is proposed. Based on the rapid identification and precise positioning of drilling holes, the manipulator's trajectory is planned using a quintic polynomial interpolation method. A position-based visual control model is constructed to control the manipulator's movement along the planned trajectory, allowing the end effector to approach the target drilling hole position, thus achieving coarse control of the manipulator. When the distance between the end effector and the target position is less than a set threshold, the coordinates of the vertices of the bounding rectangle around the target drilling hole are used as image features. An image-based visual servo control model is then constructed. The particle swarm optimization (PSO) algorithm is employed to estimate the image Jacobian matrix, and the system's motion control law is designed to enable the end effector to quickly and accurately align with the center of the target drilling hole. This approach resolves the issues of low control precision and poor safety associated with manual control of anchor drilling equipment.</p>
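<p>A minimal sketch of the image-based servo step is given below: the velocity command is obtained from the feature error through the pseudo-inverse of the image Jacobian, v = -λ·J⁺·(s - s*). J_img is a random placeholder standing in for the PSO-estimated 8×6 Jacobian of the four bounding-rectangle vertices, and the gain and feature values are illustrative:</p>
<pre>
# Minimal sketch: image-based visual servo velocity command
import numpy as np

def ibvs_velocity(features, features_desired, J_img, gain=0.5):
    """Classical IBVS law: v = -lambda * pinv(J) * (s - s*)."""
    error = features - features_desired           # 8-vector of pixel errors
    return -gain * np.linalg.pinv(J_img) @ error  # 6-vector [vx, vy, vz, wx, wy, wz]

# Current and desired vertex coordinates of the hole's bounding rectangle
s = np.array([600, 340, 680, 340, 680, 420, 600, 420], dtype=float)
s_star = np.array([620, 350, 660, 350, 660, 390, 620, 390], dtype=float)
J_img = np.random.default_rng(1).normal(size=(8, 6))  # placeholder Jacobian
print(np.round(ibvs_velocity(s, s_star, J_img), 4))
</pre>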
<p>To validate the effectiveness and accuracy of the visual positioning and autonomous control methods for the anchor drilling robot, an experimental platform was constructed. Experiments were conducted on the visual identification and positioning of drilling holes as well as the visual servo control of the manipulator of the anchor drilling robot. The experimental results demonstrated that the proposed method for visual identification and positioning of drilling holes is highly accurate and offers good real-time performance, enabling rapid and precise identification and positioning of drilling holes. The stepwise visual servo control method effectively controls the manipulator's movement, allowing the end effector to quickly and accurately align with the center of the target drilling hole. The proposed methods for visual positioning and autonomous control of the anchor drilling robot significantly enhance the automation of coal mine anchor drilling equipment, improve the efficiency of tunnel roof support, and effectively mitigate the issue of "excavation-support imbalance." This provides a novel approach for the intelligent development of comprehensive tunneling operations.</p>
﹀
|
中图分类号: |
TD421
|
开放日期: |
2025-06-14
|