Journal of Textile Research, 2023, Vol. 44, Issue (11): 105-112. DOI: 10.13475/j.fzxb.20220605501

• Textile Engineering •


Empty yarn bobbin positioning method based on machine vision

SHI Weimin1, HAN Sijie1, TU Jiajia1,2, LU Weijian1, DUAN Yutang1

  1. Key Laboratory of Modern Textile Machinery & Technology of Zhejiang Province, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. School of Automation, Zhejiang Institute of Mechanical & Electrical Engineering, Hangzhou, Zhejiang 310053, China
  • Received: 2022-06-22; Revised: 2022-12-29; Published: 2023-11-15; Online: 2023-12-25
  • About the author: SHI Weimin (1965—), male, professor, Ph.D. His research focuses on automatic control of textile machinery. E-mail: swm@zstu.edu.cn
  • Funding: National Key Research and Development Program of China (2017YFB1304000)


Abstract:

Objective On the automatic production line of a circular weft knitting machine, the position of the bobbin mouth shifts during automatic bobbin changing owing to the bobbin's gravity, vibration when the bobbin is released, sliding, and other factors. These shifts interfere with the grasping of the empty bobbin by the mechanical claw. To ensure that the claw grasps the empty yarn bobbin accurately, the bobbin mouth must first be located.

Method An improved Yolov5 model was adopted to frame the position of the bobbin mouth in the image. Sobel edge detection, threshold segmentation, filtering, and closing operations were then applied to the framed region, and the aperture and center coordinates of the empty bobbin mouth were obtained by least-squares ellipse fitting. Finally, the pinhole imaging principle of a monocular camera was used to locate the bobbin mouth.
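A minimal sketch of this processing chain in Python with OpenCV, assuming the improved Yolov5 detector has already returned a bounding box; kernel sizes and the Otsu threshold are illustrative choices, not the paper's tuned parameters (cv2.fitEllipse performs the least-squares ellipse fit):

```python
import cv2

def fit_bobbin_mouth(image, box):
    """Fit an ellipse to the bobbin mouth inside a detector bounding box.

    image : BGR camera frame
    box   : (x, y, w, h) region framed by the Yolov5 detector
    Returns ((cx, cy), (major, minor), angle) in pixels, or None.
    """
    x, y, w, h = box
    roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

    # Sobel edge detection in both directions, combined by gradient magnitude
    gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Threshold segmentation (Otsu), median filtering, then a closing operation
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.medianBlur(binary, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Least-squares ellipse fit on the largest contour (needs >= 5 points)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = [c for c in contours if len(c) >= 5]
    if not contours:
        return None
    (cx, cy), axes, angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    return (cx + x, cy + y), axes, angle  # map back to full-image coordinates
```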

Results In terms of model recognition, all four models tested in this research achieved an accuracy above 99%. Adding the attention mechanism alone decreased accuracy by 0.1% but increased detection speed. Adding the Ghost module alone reduced accuracy by 0.2% relative to the original model but roughly halved the number of parameters, also with a speed gain. The new model combining the Ghost and CBAM modules reached an accuracy of 99.2% with 3.71 M parameters and a detection speed of 54.3 frame/s, showing that the improved model maintains high accuracy while being smaller and faster, with better overall performance. In terms of bobbin positioning, the maximum absolute positioning errors of the bobbin changing robot on the X, Y, and Z axes were 1.3, 1.9, and 0.7 mm, respectively, all within the allowable range. Under the same test environment, the deep-learning-based positioning method took about 2 s per detection versus 5 s for the conventional method, an improvement in time efficiency of about 150%. Conventional algorithms were also more susceptible to environmental influence, which introduced interference during contour fitting and caused failures in fitting the bobbin mouth correctly, whereas the deep learning method effectively eliminated this interference and provided favorable conditions for the ellipse fitting.
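For intuition about why the Ghost module roughly halves the parameter count, here is a minimal PyTorch sketch of a GhostConv-style layer (after GhostNet by Han et al.): only half of the output channels come from an ordinary convolution, the rest from a cheap depthwise operation. The layer sizes and the SiLU activation are assumptions mirroring common Yolov5 practice, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution: half the output channels from a standard
    convolution, the other half from a cheap depthwise 'ghost' branch."""

    def __init__(self, in_ch, out_ch, k=1, s=1):
        super().__init__()
        half = out_ch // 2
        # Primary convolution produces half of the feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(half),
            nn.SiLU(),
        )
        # Cheap depthwise convolution generates the remaining "ghost" maps
        self.cheap = nn.Sequential(
            nn.Conv2d(half, half, 5, 1, 2, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```

Counting weights with `sum(p.numel() for p in layer.parameters())` shows the saving directly: a 3×3 GhostConv from 128 to 128 channels needs roughly half the weights of a plain 3×3 convolution of the same shape.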

Conclusion A positioning method combining deep learning with conventional image processing is proposed to locate the empty bobbin mouth when the bobbin changing truss robot takes empty bobbins from the yarn frame on the automatic production line of a circular weft knitting machine. The improved Yolov5 model frames the approximate position of the bobbin mouth, the information of the empty bobbin mouth is then extracted by conventional image processing and ellipse contour fitting, and the monocular camera ranging principle is used to locate the mouth. Experiments show that the improved model is smaller and more accurate, and that the positioning errors remain within the allowable range, essentially meeting the empty-bobbin positioning requirements of the bobbin changing truss robot and providing a reference for the improvement and development of automatic bobbin changing technology in the textile industry.
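As a worked example of the monocular ranging principle, the sketch below recovers the bobbin-mouth position in camera coordinates under common assumptions: the physical mouth diameter is known a priori, the fitted ellipse supplies the apparent diameter and center in pixels, and the intrinsics come from prior calibration. The symbols and the helper name are illustrative, not the paper's notation.

```python
def locate_bobbin_mouth(u, v, d_px, D_mm, fx, fy, cx, cy):
    """Recover (X, Y, Z) of the bobbin mouth center in camera coordinates.

    u, v   : fitted ellipse center in pixels
    d_px   : apparent mouth diameter in pixels (ellipse major axis)
    D_mm   : known physical diameter of the bobbin mouth, in mm
    fx, fy : focal lengths in pixels; cx, cy : principal point
    """
    Z = fx * D_mm / d_px   # similar triangles: depth from the known size
    X = (u - cx) * Z / fx  # back-project the pixel offset to millimeters
    Y = (v - cy) * Z / fy
    return X, Y, Z
```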

Key words: automatic bobbin changing, yarn bobbin, machine vision, Yolov5, ellipse fitting, monocular camera positioning

CLC number: TS103.7

Fig. 1  Bobbin changing robot
Fig. 2  Yarn bobbin
Fig. 3  Yolov5 network framework (Note: 320×320×32 denotes a feature map of 320 pixels × 320 pixels with 32 channels; the others follow analogously.)
Fig. 4  GhostConv module
Fig. 5  GhostBottleneck module
Fig. 6  CBAM network structure
Fig. 7  Model detection results
Fig. 8  Sobel operator
Fig. 9  Edge detection results
Fig. 10  Ellipse fitting results
Fig. 11  Schematic diagram of the distance measurement principle
Fig. 12  Schematic diagram of the detection principle

Table 1  Model detection results

Model               Parameters/M   mAP/%   FPS/(frame·s⁻¹)
Yolov5              7.02           99.2    49.2
Yolov5+CBAM         7.05           99.1    52.9
Yolov5+Ghost        3.68           99.0    56.8
Yolov5+Ghost+CBAM   3.71           99.2    54.3

Table 2  Positioning experiment results

      Actual distance/mm    Measured distance/mm     Error/mm
No.   X     Y     Z         X       Y      Z         X      Y      Z
1     150   -27   -14       150.2   -26.4  -13.5     0.2    0.6    0.5
2     150   -27   4         149.1   -26.6  3.6       -0.9   0.4    -0.4
3     150   -7    -18       149.6   -6.7   -17.6     -0.4   0.3    0.4
4     160   -36   -10       160.6   -36.5  -10.2     0.6    -0.5   -0.2
5     160   -36   -19       159.7   -37.3  -19.1     -0.3   -1.3   -0.1
6     160   -26   -19       161.3   -27.9  -19.4     1.3    -1.9   -0.4
7     175   -7    -35       174.2   -7.7   -34.7     -0.8   -0.7   0.3
8     175   -7    -17       175.9   -7.2   -17.7     0.9    -0.2   -0.7
9     175   -7    -26       174.1   -7.2   -26.1     -0.9   -0.2   -0.1
10    222   -8    -3        222.7   -8.2   -3.2      0.7    -0.2   -0.2
11    222   -3    -3        223.2   -3.6   -3.5      1.2    -0.6   -0.5
12    222   -2    -11       220.9   -2.0   -11.3     -1.1   0      -0.3