Journal of Textile Research ›› 2023, Vol. 44 ›› Issue (11): 105-112. DOI: 10.13475/j.fzxb.20220605501

• Textile Engineering •

Empty yarn bobbin positioning method based on machine vision

SHI Weimin1, HAN Sijie1, TU Jiajia1,2, LU Weijian1, DUAN Yutang1

  1. Key Laboratory of Modern Textile Machinery & Technology of Zhejiang Province, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. School of Automation, Zhejiang Institute of Mechanical & Electrical Engineering, Hangzhou, Zhejiang 310053, China
  • Received: 2022-06-22  Revised: 2022-12-29  Online: 2023-11-15  Published: 2023-12-25

Abstract:

Objective In the automated production line of a circular weft knitting robot, the position of the bobbin mouth shifts during automatic bobbin changing owing to the bobbin's gravity, vibration when the bobbin is released, sliding, and other factors. All of these shifts affect the gripping of the empty bobbin by the mechanical claw. To ensure that the claw grasps the empty yarn bobbin accurately, the position of the empty bobbin must be located.

Method An improved Yolov5 model was adopted to frame the position of the bobbin mouth in the image; Sobel edge detection, threshold segmentation, median filtering, and a closing operation were then applied to the framed region, and the aperture and center coordinates of the empty yarn bobbin were obtained by least-squares ellipse fitting. Finally, the pinhole imaging principle of a monocular camera was adopted to locate the bobbin mouth.
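The least-squares fitting step above can be illustrated with a simplified sketch. The paper fits an ellipse to the bobbin-mouth edge; a circle fit is shown here because it demonstrates the same algebraic idea in a few lines. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def fit_circle(points):
    """Least-squares fit of x^2 + y^2 + D*x + E*y + F = 0 to edge points.

    Returns (cx, cy, r): circle center and radius, analogous to the
    bobbin-mouth center coordinates and aperture in the paper.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Synthetic edge points on a circle of radius 5 centered at (3, 4),
# standing in for the segmented bobbin-mouth contour
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([3 + 5 * np.cos(theta), 4 + 5 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
```

In practice the edge points would come from the Sobel/threshold/closing pipeline described above, restricted to the region framed by the detection model.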

Results In terms of model recognition, the accuracy of the four models tested in this research exceeded 99%. Adding the attention mechanism alone decreased accuracy by 0.1% but increased speed. Adding the Ghost module alone reduced accuracy by 0.2% relative to the original model, but roughly halved the number of parameters and also increased speed. The new model with both Ghost and CBAM modules achieved an accuracy of 99.2% with 3.71 M parameters and a detection speed of 54.3 frames/s, showing that the improved model maintains high accuracy while being smaller and faster, for better overall performance. In terms of yarn bobbin positioning, the maximum absolute positioning errors of the bobbin changing robot on the X, Y, and Z axes were 1.3, 1.9, and 0.7 mm, respectively, all within the allowable error range. Under the same testing environment, the deep-learning-based positioning method took about 2 s, compared with 5 s for the conventional method, an improvement in time efficiency of about 150%. Conventional algorithms were also more susceptible to environmental influence, which tended to cause interference during contour fitting and failure to fit the yarn bobbin correctly, whereas the deep learning method effectively eliminated environmental interference and provided favorable conditions for the fitting.
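Why the Ghost module roughly halves the parameter count can be sketched with a simple weight-count calculation based on the GhostNet design: a primary convolution produces only a fraction of the output feature maps, and cheap depthwise operations generate the rest. The kernel sizes and ratio below are illustrative assumptions, not the paper's exact configuration.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_conv_params(c_in, c_out, k, dw_k=5, ratio=2):
    """Weight count of a Ghost convolution: a primary convolution
    produces c_out/ratio feature maps, and cheap depthwise operations
    (dw_k x dw_k, one filter per map) generate the remaining maps."""
    primary = c_in * (c_out // ratio) * k * k
    cheap = (c_out // ratio) * (ratio - 1) * dw_k * dw_k
    return primary + cheap

# Example layer: 128 input channels, 256 output channels, 3x3 kernels
std = conv_params(128, 256, 3)
ghost = ghost_conv_params(128, 256, 3)
saving = ghost / std  # slightly over one half
```

The cheap depthwise term is small relative to the primary convolution, so the total lands just above half the standard count, consistent with the roughly halved parameter count reported for Yolov5+Ghost in Tab. 1.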

Conclusion A positioning method combining deep learning with conventional image processing is proposed to solve the problem of locating the empty yarn bobbin mouth when the bobbin changing truss robot takes empty bobbins from the yarn frame in the automated production line of a circular weft knitting machine. The improved Yolov5 model frames the approximate position of the bobbin mouth, the bobbin-mouth information is then obtained by conventional image processing and ellipse contour fitting, and the monocular camera ranging principle is used to locate the empty bobbin mouth. Experiments show that the improved model is smaller yet remains highly accurate, and that the positioning error is within the allowable range, which meets the requirements of the empty-bobbin positioning function of the bobbin changing truss robot and provides a reference for the improvement and development of automatic bobbin changing technology in the textile industry.
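The monocular ranging step can be sketched under the pinhole model: depth follows from the known physical diameter of the bobbin mouth and its apparent size in pixels, and the lateral offsets follow by similar triangles. This is a minimal sketch, not the paper's calibration procedure; all symbol names and the example numbers are assumptions.

```python
def locate_bobbin_mouth(u, v, d_px, f_px, cx, cy, d_mm):
    """Estimate the bobbin-mouth position from one image, pinhole model.

    u, v   : pixel coordinates of the fitted ellipse center
    d_px   : fitted diameter of the bobbin mouth in pixels
    f_px   : camera focal length in pixels
    cx, cy : principal point in pixels
    d_mm   : known physical bobbin-mouth diameter in mm
    Returns (X, Y, Z) in mm in the camera frame.
    """
    Z = f_px * d_mm / d_px   # depth from apparent size
    X = (u - cx) * Z / f_px  # lateral offsets by similar triangles
    Y = (v - cy) * Z / f_px
    return X, Y, Z

# Illustrative numbers: 50 mm mouth seen as 250 px with f = 1000 px
X, Y, Z = locate_bobbin_mouth(u=740, v=360, d_px=250,
                              f_px=1000.0, cx=640, cy=360, d_mm=50.0)
```

A real deployment would first calibrate f_px and the principal point, then transform the camera-frame result into the robot's coordinate system before commanding the claw.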

Key words: automatic bobbin changing, yarn bobbin, machine vision, Yolov5, ellipse fitting, monocular camera positioning

CLC Number: TS103.7

Fig. 1

Bobbin changing robot. (a) Overall picture; (b) End effector

Fig. 2

Bobbin picture. (a) Cylindrical bobbin; (b) Conical bobbin

Fig. 3

Yolov5 network framework

Fig. 4

GhostConv module

Fig. 5

GhostBottleneck module

Fig. 6

CBAM network structure

Fig. 7

Model detection effect

Fig. 8

Sobel operator. (a) X direction; (b) Y direction

Fig. 9

Edge detection results. (a) Edge detection; (b) Threshold segmentation; (c) Median filter; (d) Closing operation

Fig. 10

Ellipse fitting result

Fig. 11

Schematic diagram of ranging principle

Fig. 12

Schematic diagram of detection principle

Tab. 1

Model detection results

Model	Parameters/M	mAP/%	Detection speed/(frame·s-1)
Yolov5	7.02	99.2	49.2
Yolov5+CBAM	7.05	99.1	52.9
Yolov5+Ghost	3.68	99.0	56.8
Yolov5+Ghost+CBAM	3.71	99.2	54.3

Tab. 2

Positioning experiment results

No.	Actual distance/mm	Measured distance/mm	Error/mm
	X	Y	Z	X	Y	Z	X	Y	Z
1	150	-27	-14	150.2	-26.4	-13.5	0.2	0.6	0.5
2	150	-27	4	149.1	-26.6	3.6	-0.9	0.4	-0.4
3	150	-7	-18	149.6	-6.7	-17.6	-0.4	0.3	0.4
4	160	-36	-10	160.6	-36.5	-10.2	0.6	-0.5	-0.2
5	160	-36	-19	159.7	-37.3	-19.1	-0.3	-1.3	-0.1
6	160	-26	-19	161.3	-27.9	-19.4	1.3	-1.9	-0.4
7	175	-7	-35	174.2	-7.7	-34.7	-0.8	-0.7	0.3
8	175	-7	-17	175.9	-7.2	-17.7	0.9	-0.2	-0.7
9	175	-7	-26	174.1	-7.2	-26.1	-0.9	-0.2	-0.1
10	222	-8	-3	222.7	-8.2	-3.2	0.7	-0.2	-0.2
11	222	-3	-3	223.2	-3.6	-3.5	1.2	-0.6	-0.5
12	222	-2	-11	220.9	-2.0	-11.3	-1.1	0	-0.3