Thesis Information

Title (Chinese):

 石英坩埚气泡微变检测与微气泡计数方法研究

Name:

 郑超 (Zheng Chao)

Student ID:

 20207223070

Confidentiality:

 Public

Language:

 Chinese

Discipline code:

 085400

Discipline:

 Engineering - Electronic Information

Student type:

 Master's student

Degree:

 Master of Engineering

Degree year:

 2023

Institution:

 Xi'an University of Science and Technology

School:

 School of Communication and Information Engineering

Major:

 Electronics and Communication Engineering

Research area:

 Image processing

First supervisor:

 赵谦 (Zhao Qian)

Supervisor's institution:

 Xi'an University of Science and Technology

Submission date:

 2023-06-15

Defense date:

 2023-05-31

Title (English):

 Research on Detection and Counting of Microstructural Changes and Microbubbles in Quartz Crucibles

Keywords (translated from Chinese):

 Quartz crucible; Bubble detection; YOLOv5s; Kalman filtering; Hungarian matching

Keywords (English):

 Quartz crucible; Bubble detection; YOLOv5s; Kalman filtering; Hungarian matching

Abstract (translated from Chinese):

Quartz crucibles, manufactured from high-purity quartz, are important single-use vessels for melting silicon and growing crystals. Research has found that oxygen contamination degrades the efficiency of silicon solar cells, and that the microbubbles in the crucible's transparent layer are the main source of oxygen contamination in monocrystalline silicon. To count the bubbles in the transparent layer promptly and accurately, this thesis designs a spatial bubble counting method based on object detection that maintains counting accuracy even under complex, shaky shooting conditions, providing data and technical support for quartz crucible quality evaluation.

Because small bubbles in the images offer too few features, traditional algorithms struggle to extract effective features for recognition, so this thesis extracts bubble features with a convolutional neural network. First, a crucible bubble dataset is built; it contains 6217 bubble targets and fully covers factors such as size, degree of occlusion, and density, which greatly strengthens the model's robustness. Second, the YOLOv5s feature extraction network is optimized to reduce the loss of small-bubble detail features. Third, dilated convolutions in the neck enlarge the receptive field of the feature map to extract global semantic features. Finally, an efficient channel attention mechanism is added before the detection layer to strengthen the expression of important channel features. Experiments show that, compared with the original YOLOv5s, the improved YOLOv5-QCB model effectively lowers the miss rate for small bubbles, raises the average detection accuracy from 96.27% to 98.76%, halves the model weights, and reaches a detection speed of up to 82 FPS.

Because the appearance and position of bubbles change continuously across video frames, this thesis adopts a detection-based tracking method to associate and count bubbles between frames. The method uses the improved YOLOv5-QCB as the detection module, applies a Kalman filter to predict each bubble's state, and then matches detection boxes to predicted boxes with the Hungarian algorithm, tracking and counting the bubbles in the video. To address mismatches caused by densely packed bubbles in the transparent layer, the intersection-over-union metric is modified, exploiting the fact that a target's size does not change abruptly between consecutive frames, to weaken the association between different targets. Experiments show that this method effectively resolves the mismatches caused by dense bubbles during tracking: counting accuracy in dense scenes still reaches 98.81%, and the method can quickly and accurately report the number and size of bubbles in the crucible, offering a useful reference for quartz crucible quality inspection.
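The prediction stage described above can be sketched as a constant-velocity Kalman filter over each bubble's centre. This is a minimal illustration only: the state layout, time step, and noise matrices below are assumptions made for the sketch, not the thesis's actual tracker parameters.

```python
import numpy as np

dt = 1.0  # one video frame (assumed)
F = np.array([[1, 0, dt, 0],   # state transition: x += vx*dt, y += vy*dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only the (x, y) position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise (illustrative)
R = np.eye(2) * 1e-1           # measurement noise (illustrative)

def predict(x, P):
    """Propagate state [x, y, vx, vy] and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a matched detection z = [x, y]."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# A bubble moving right at 2 px/frame:
x = np.array([10.0, 20.0, 2.0, 0.0])
P = np.eye(4)
x_pred, P_pred = predict(x, P)
print(x_pred[:2])  # predicted centre: [12. 20.]
```

The predicted box is what the matching step compares against the current frame's detections; after a successful match, `update` pulls the track back toward the measured position.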

Abstract (English):

The quartz crucible, processed and prepared from high-purity quartz, is an important single-use container for melting silicon and growing crystals. Studies have found that oxygen contamination affects the efficiency of silicon solar cells, and that the microbubbles in the transparent layer of the crucible are the main source of oxygen contamination in single-crystal silicon. To count the bubbles in the transparent layer promptly and accurately, this thesis designs an object detection-based method for counting spatial bubbles. Even in a complex, shaky shooting environment, the accuracy of counting bubbles in the transparent layer is maintained, providing data and technical support for the quality evaluation of quartz crucibles.

Because small bubbles in the image provide insufficient features, traditional algorithms have difficulty extracting effective features for recognition. This thesis therefore proposes a bubble feature extraction method based on a convolutional neural network. First, a crucible bubble dataset is constructed; it includes 6217 bubble targets and fully considers factors such as size, degree of occlusion, and density, greatly enhancing the robustness of the model. Second, the YOLOv5s feature extraction network structure is optimized to reduce the loss of small-bubble detail features. Third, dilated convolutions are used in the neck to enlarge the receptive field of the feature map and extract global semantic features. Finally, an efficient channel attention mechanism is added before the detection layer to strengthen the expression of important channel features. Experimental results show that, compared with the original YOLOv5s model, the improved YOLOv5-QCB model effectively reduces the miss rate for small bubbles, increasing the average detection accuracy from 96.27% to 98.76% while halving the model weights and reaching a detection speed of up to 82 FPS.
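The receptive-field gain from dilated convolutions mentioned above can be checked with a short calculation. This is a generic sketch of the standard receptive-field recurrence; the layer configuration is illustrative and not the actual YOLOv5-QCB neck.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    layers: list of (kernel, dilation, stride) tuples, input to output.
    """
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (effective_kernel(k, d) - 1) * jump  # grow by the effective span
        jump *= s                                  # stride compounds downstream
    return rf

# Three 3x3 convs, stride 1: plain vs. dilation rates (1, 2, 4)
plain = receptive_field([(3, 1, 1)] * 3)
dilated = receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)])
print(plain, dilated)  # 7 vs. 15
```

The dilated stack more than doubles the receptive field at the same parameter count, which is the motivation for using it to capture global context around each bubble.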

Because the appearance and position of bubbles change continuously in the video, this thesis proposes a detection-based tracking method to achieve inter-frame association and counting of bubbles. The method employs the improved YOLOv5-QCB as the detection module and combines it with a Kalman filter to predict the state of each bubble target; the Hungarian algorithm then matches detection boxes to predicted boxes, tracking and counting the bubbles in the video. To address false matches caused by the dense distribution of bubbles in the transparent layer, the intersection-over-union metric is modified, exploiting the fact that target size does not change abruptly between consecutive frames, to weaken the association between different targets. The experimental results show that the method effectively solves the mismatching caused by dense bubbles in the transparent layer during tracking: counting accuracy in dense environments still reaches 98.81%, and the method can quickly and accurately count the number and size of bubbles in the crucible space, providing a useful reference for quartz crucible quality inspection.
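The matching step can be sketched as minimising a per-pair cost built from IoU plus a size-consistency penalty, reflecting the idea that a bubble's box area barely changes between consecutive frames. The penalty form and weight below are illustrative assumptions, and an exhaustive search stands in for the Hungarian algorithm (equivalent for a handful of boxes):

```python
from itertools import permutations

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def size_penalty(a, b):
    """Penalise area jumps between frames (illustrative form)."""
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    sa, sb = area(a), area(b)
    return abs(sa - sb) / max(sa, sb)

def match(preds, dets, w=0.5):
    """Optimal assignment by exhaustive search (a stand-in for the
    Hungarian algorithm; assumes equally many predictions and detections)."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(dets))):
        cost = sum(1 - iou(p, dets[j]) + w * size_penalty(p, dets[j])
                   for p, j in zip(preds, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

preds = [(0, 0, 10, 10), (0, 0, 4, 4)]   # Kalman-predicted boxes
dets = [(1, 1, 11, 11), (1, 1, 5, 5)]    # current-frame detections
print(match(preds, dets))  # [0, 1]: each detection stays with its own track
```

With overlapping bubbles of different sizes, the size penalty raises the cost of pairing a large prediction with a small detection, which is the intuition behind the thesis's modified IoU for dense scenes.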

CLC number:

 TP391.4

Open access date:

 2023-06-15
