Journal of Textile Research ›› 2024, Vol. 45 ›› Issue (01): 194-202. DOI: 10.13475/j.fzxb.20220706101

• Machinery & Equipment •

Model for empty bobbin recognition based on improved residual network

LU Weijian1, TU Jiajia1,2, WANG Junru1, HAN Sijie1, SHI Weimin1

  1. Faculty of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. College of Automation, Zhejiang Institute of Mechanical and Electrical Engineering, Hangzhou, Zhejiang 310053, China
  • Received: 2022-07-18 Revised: 2023-03-20 Online: 2024-01-15 Published: 2024-03-14

Abstract:

Objective In the automatic production line of circular weft knitting, conventional machine vision identifies empty bobbins with low accuracy and requires a large number of model parameters during the automatic bobbin-changing process of the bobbin-changing robot, owing to the complex background of the textile workshop and the many types of bobbins. To ensure that bobbin-changing robots identify empty bobbins accurately, an empty bobbin recognition model with high accuracy and light weight needs to be designed.

Method Based on the ResNet-18 model, the convolution kernels were made lightweight, the classical residual module was improved, and the SENet attention mechanism was added, improving the detection accuracy for empty bobbins. By simulating various interference factors at the production site, the training samples were augmented, aiming to improve the robustness of the model and make it better suited to the actual production environment. The models before and after improvement were compared with other detection models.
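To make the architectural changes concrete, the following is a minimal PyTorch sketch of a residual building block that combines small 3×3 convolution kernels with an SENet (squeeze-and-excitation) stage. It is illustrative only: the exact module layouts used in the paper are those of Fig.7-Fig.9, and the reduction ratio r=16 is the SENet default rather than a value reported here.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code) of a
# residual block with small 3x3 kernels and a squeeze-and-excitation stage.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel weights and rescale features."""

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling
        self.fc = nn.Sequential(             # excitation: bottleneck MLP
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by learned importance


class SEResidualBlock(nn.Module):
    """ResNet-18-style basic block with small kernels and an SE stage."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.se = SEBlock(out_ch)
        # 1x1 projection keeps the shortcut shape-compatible when dimensions change
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        return torch.relu(self.se(self.body(x)) + self.shortcut(x))
```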

Results The original dataset was adopted to study the influence of the lightweight convolution kernel design, the attention mechanism, and the improved residual module on the model. Ablation experiments showed that using small convolution kernels reduced the model parameters to a certain extent, that adding the attention mechanism improved the recognition accuracy by 3.86%, and that adding the optimized residual structure not only improved the recognition accuracy by 1.22% but also reduced the number of model parameters to about 1/6.5 of the original. Under the same experimental conditions, the detection results of the improved model were compared with those of the ResNet-18, VGG-16, and AlexNet network models. The accuracy of the improved model on the validation set was 99.6%, which was 4.46% higher than that of the ResNet-18 model and 7.05%-9.41% higher than those of VGG-16 and AlexNet. Training on the data-enhanced datasets was then verified under the same training parameters and network structure. Because data enhancement diversified the bobbin data and effectively avoided overfitting, the accuracy of both the original model and the improved model increased. The accuracy of the improved model was 0.43%-0.72% higher than that of the ResNet-18 model and was less affected by the choice of dataset, indicating better robustness against interference from the surrounding conditions. The improved model converged faster than the other recognition models, its accuracy curve rose more smoothly, and its training accuracy was the highest, illustrating the reliability and effectiveness of the improved model for identifying empty bobbins. The improved model was far superior to the original model in extracting both shallow and deep network features, effectively reducing the loss of yarn feature information during convolution and improving the ability to identify yarn. The number of parameters of the improved model was reduced to about 1/10 of that of the original model, thereby reducing the required storage space. This work provides an approach for deploying an empty bobbin recognition system based on a residual network model on edge devices in textile workshops.

Conclusion On the basis of the ResNet-18 network, the network was modified by combining lightweight convolution kernels, the SENet attention mechanism, and the improved residual module. The new model not only improves accuracy but also reduces the number of model parameters when identifying empty bobbins in complex environments. Compared with other recognition models, the improved model is more robust against interference. Its small number of parameters provides a basis for deploying empty bobbin recognition models on embedded devices.

Key words: textile workshop, empty bobbin recognition, residual network, model lightweighting, deep learning

CLC Number: TS106

Fig.1

Spindle inspection process

Fig.2

Classical residual structure. (a) Classical residual structure 1; (b) Classical residual structure 2

Fig.3

Examples of empty and non-empty bobbin samples. (a) Conical empty yarn cylinder; (b) White cylindrical empty yarn cylinder; (c) White conical non-empty yarn cylinder; (d) Black cylindrical non-empty yarn cylinder

Fig.4

Sample collection platform

Fig.5

Data enhancement effects for different situations simulated at the production site. (a) Random flip; (b) Image filtering; (c) Brightness enhancement
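The three augmentations of Fig.5 can be approximated with standard image transforms. Below is a sketch assuming torchvision; the flip direction, filter kernel size, and brightness range are illustrative assumptions, as the exact settings are not given in this abstract.

```python
# Approximate the augmentations of Fig.5 (assumed torchvision pipeline;
# the specific parameter values are illustrative, not from the paper).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),         # (a) random flip
    transforms.GaussianBlur(kernel_size=3),         # (b) image filtering
    transforms.ColorJitter(brightness=(1.0, 1.5)),  # (c) brightness enhancement
    transforms.ToTensor(),
])
```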

Fig.6

Overall network structure

Fig.7

Lightweight residual structure. (a) Lightweight residual module 1; (b) Lightweight residual module 2

Fig.8

SENet network structure

Fig.9

Improved residual module. (a) Improved residual module 1; (b) Improved residual module 2

Tab.1

ResNet-18 design and performance comparison

Scheme No. | Convolution kernel | Attention mechanism | Lightweight residual structure | Training set accuracy/% | Validation set accuracy/% | Model training parameters/10^6
1 | Large | Not applied | Not applied | 98.55 | 95.14 | 181.75
2 | Small | Not applied | Not applied | 98.22 | 91.90 | 99.91
3 | Large | Applied | Not applied | 99.40 | 99.00 | 181.93
4 | Large | Not applied | Applied | 99.28 | 96.95 | 27.78
5 | Small | Applied | Not applied | 99.40 | 98.38 | 102.73
6 | Small | Applied | Applied | 99.70 | 99.60 | 18.49
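The parameter counts in Tab.1 (and in Tab.2 and Tab.3 below) are reported in units of 10^6. Assuming a PyTorch model such as the block sketched above, they can be reproduced with a hypothetical helper like the following:

```python
# Count trainable parameters in millions, the unit used in Tab.1-Tab.3
# (param_count_millions is a hypothetical helper, not from the paper).
import torch.nn as nn

def param_count_millions(model: nn.Module) -> float:
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```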

Tab.2

Recognition accuracy of different models

Model | Training set accuracy/% | Validation set accuracy/% | Parameters/10^6
AlexNet | 96.45 | 90.19 | 30.95
VGG-16 | 96.85 | 92.95 | 1548.43
ResNet-18 | 98.55 | 95.14 | 181.75
Improved model | 99.70 | 99.60 | 18.49

Tab.3

Comparison of recognition effects between the proposed algorithm and the original model

Model | Dataset | Accuracy/% (original dataset) | Accuracy/% (dataset 1) | Accuracy/% (dataset 2) | Accuracy/% (dataset 3) | Parameters/10^6
ResNet-18 | Training set | 98.55 | 99.71 | 99.20 | 99.75 | 181.75
ResNet-18 | Test set | 95.14 | 99.26 | 98.97 | 99.21 | 181.75
Improved model | Training set | 99.70 | 99.89 | 99.70 | 100.00 | 18.49
Improved model | Test set | 99.60 | 99.75 | 99.40 | 99.93 | 18.49

Fig.10

Comparison of recognition accuracy of different models

Fig.11

Comparison of bobbin recognition effects of four models. (a) Improved model; (b) ResNet-18 model; (c) VGG-16 model; (d) AlexNet model

Fig.12

Visual comparison of feature extraction by different models. (a) Original module; (b) Improved module
