Journal of Textile Research ›› 2024, Vol. 45 ›› Issue (01): 194-202. doi: 10.13475/j.fzxb.20220706101

• Machinery and Equipment •

Model for empty bobbin recognition based on improved residual network

LU Weijian1, TU Jiajia1,2, WANG Junru1, HAN Sijie1, SHI Weimin1

  1. Faculty of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
  2. College of Automation, Zhejiang Institute of Mechanical and Electrical Engineering, Hangzhou, Zhejiang 310053, China

  • Received: 2022-07-18; Revised: 2023-03-20; Published: 2024-01-15; Online: 2024-03-14
  • Corresponding author: SHI Weimin (1965—), male, professor, Ph.D., whose research focuses on electromechanical control technology for textile equipment. E-mail: swm@zstu.edu.cn
  • First author: LU Weijian (1997—), male, master's student, whose research focuses on computer image processing and machine vision.
  • Funding: National Key Research and Development Program of China (2017YFB1304000)

Abstract:

To address the low accuracy and large parameter counts of conventional machine-vision recognition of empty bobbins, caused by the complex background of textile workshops and the wide variety of bobbin types, an empty-bobbin recognition model based on an improved residual network was designed. Drawing on the structure of the ResNet family, the model lightweights the convolution kernels, improves the classical residual module, and adds the SENet attention mechanism, so as to raise the accuracy of empty-bobbin detection while reducing the number of model parameters. Finally, data augmentation was used to build a bobbin dataset suited to actual factory production. Experimental results show that, in the ablation study, applying the SENet attention mechanism raised accuracy by 3.86%, while the optimized residual module not only reduced the model parameters by a factor of about 6.5 but also improved accuracy by 1.22%. On the validation set of the original dataset, the accuracy of the improved model was 99.6%, which is 4.46% higher than that of ResNet-18 and 7.05%-9.41% higher than that of VGG-16 and AlexNet. On the augmented dataset, the accuracy of the recognition models rose considerably, but the accuracy of the improved model changed little, indicating that the model is robust and not easily affected by insufficient samples. The parameter count of the improved model shrinks to about 1/10 of that of the original model, offering an approach for deploying empty-bobbin recognition models on embedded devices.


Objective On automatic circular weft knitting production lines, the bobbin-changing robot must recognize empty bobbins reliably before replacing them. Because of the complex background of the textile workshop and the wide variety of bobbin types, conventional machine vision identifies empty bobbins with low accuracy and requires models with large numbers of parameters. An empty-bobbin recognition model that is both highly accurate and lightweight is therefore needed.

Method Taking ResNet-18 as the base model, the convolution kernels were lightweighted, the classical residual module was improved, and the SENet attention mechanism was added, raising the accuracy of empty-bobbin detection. The training samples were augmented by simulating various interference factors at the production site, so as to improve the robustness of the model and better match the actual production environment. The models before and after the improvement were compared with other detection models.
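
The sketch below illustrates the general idea in PyTorch: a bottleneck-style residual block in which 1×1 convolutions cheapen the 3×3 stage, and a squeeze-and-excitation (SE) stage reweights channels before the shortcut addition. The bottleneck width, the SE reduction ratio, and the placement of the SE stage are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch: lightweight residual block with SENet attention (assumed layout).
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., CVPR 2018)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global average pooling
        self.fc = nn.Sequential(                   # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # reweight channels


class LightResidualBlock(nn.Module):
    """Residual block with a 1x1 -> 3x3 -> 1x1 bottleneck and SE attention."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        mid = out_ch // 4                          # bottleneck width (assumption)
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.se = SEBlock(out_ch)
        # Projection shortcut when the shape changes, identity otherwise.
        self.shortcut = (
            nn.Identity()
            if stride == 1 and in_ch == out_ch
            else nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.se(self.body(x)) + self.shortcut(x))


if __name__ == "__main__":
    block = LightResidualBlock(64, 128, stride=2)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])
```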

Results The original dataset was adopted to study the influence of convolution-kernel lightweighting, the attention mechanism, and the improved residual module on the model. Ablation experiments showed that the small convolution kernels reduced the model parameters to a certain extent, that adding the attention mechanism improved recognition accuracy by 3.86%, and that the optimized residual structure not only improved recognition accuracy by 1.22% but also reduced the number of model parameters by a factor of about 6.5. Under the same experimental conditions, the detection results of the improved model were compared with those of the ResNet-18, VGG-16, and AlexNet models. The accuracy of the improved model on the validation set was 99.6%, which is 4.46% higher than that of ResNet-18 and 7.05%-9.41% higher than that of VGG-16 and AlexNet. Training on the augmented dataset was then verified under the same training parameters and network structures. Because data augmentation diversified the bobbin data and effectively avoided overfitting, the accuracy of both the original model and the improved model rose. The accuracy of the improved model was 0.43%-0.72% higher than that of ResNet-18 and was less affected by the choice of dataset, indicating better robustness against interference from the surrounding conditions. The improved model converged faster than the other recognition models, its accuracy curve rose more smoothly, and its training accuracy was the highest, illustrating the reliability and effectiveness of the improved model for identifying empty bobbins. The improved model was found to be far superior to the original model in extracting both shallow and deep network features; it effectively reduced the loss of bobbin feature information during convolution and improved the ability to identify bobbins. The number of parameters of the improved model was reduced to about 1/10 of that of the original model, thereby reducing the required storage space. The work shows how an empty-bobbin recognition system based on a residual network model can be deployed at the edge in textile workshops.
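
As a quick cross-check of the reported figures, the snippet below reproduces the "about 1/10" ratio from the Table 1 numbers and shows the usual PyTorch idiom for counting trainable parameters; `count_params` is a generic helper written here for illustration, not code from the paper.

```python
# Sanity-checking the reported parameter reduction from the paper's own figures.
import torch.nn as nn

baseline_params = 181.75e6  # Tab. 1, scheme 1: original ResNet-18 as counted in the paper
improved_params = 18.49e6   # Tab. 1, scheme 6: improved model
print(f"reduction factor: {baseline_params / improved_params:.2f}x")  # ~9.83x, i.e. about 1/10


def count_params(model: nn.Module) -> int:
    """Return the number of trainable parameters of any PyTorch model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```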

Conclusion Building on the ResNet-18 network, the proposed model combines convolution-kernel lightweighting, the SENet attention mechanism, and an improved residual module. The new model not only improves accuracy in identifying empty bobbins in complex environments but also reduces the number of model parameters. Compared with other recognition models, the improved model is more robust against interference. Its small parameter count offers an approach for deploying empty-bobbin recognition models on embedded devices.

Key words: textile workshop, empty bobbin recognition, residual network, model lightweighting, deep learning

CLC number: TS106

Fig. 1 Bobbin detection flow

Fig. 2 Classical residual structure

Fig. 3 Examples of empty and non-empty bobbin samples

Fig. 4 Sample collection platform

Fig. 5 Data augmentation effects simulating different conditions at the production site. Note: samples 1-4 are randomly selected data.
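
The page does not list the authors' exact transforms; the torchvision pipeline below is one plausible sketch of augmentations mimicking production-site interference such as lighting drift, defocus blur, pose variation, and partial occlusion of the bobbin. All parameter values are assumptions.

```python
# Illustrative augmentation pipeline (assumed, not the paper's published transforms).
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.3),      # workshop lighting changes
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # defocus/motion blur
    transforms.RandomRotation(degrees=15),                     # camera/bobbin pose drift
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),        # occlusion by yarn or machinery
])
```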

Fig. 6 Overall network structure

Fig. 7 Lightweight residual structure

Fig. 8 SENet network structure

Fig. 9 Improved residual module

Tab. 1 ResNet-18 design schemes and performance comparison

Scheme  Kernel design  Attention mechanism  Lightweight residual  Accuracy/% (training set)  Accuracy/% (validation set)  Training parameters/10^6
1       large kernels  not applied          not applied           98.55                      95.14                        181.75
2       small kernels  not applied          not applied           98.22                      91.90                        99.91
3       large kernels  applied              not applied           99.40                      99.00                        181.93
4       large kernels  not applied          applied               99.28                      96.95                        27.78
5       small kernels  applied              not applied           99.40                      98.38                        102.73
6       small kernels  applied              applied               99.70                      99.60                        18.49

Tab. 2 Recognition accuracy of different models

Model           Accuracy/% (training set)  Accuracy/% (validation set)  Parameters/10^6
AlexNet         96.45                      90.19                        30.95
VGG-16          96.85                      92.95                        1548.43
ResNet-18       98.55                      95.14                        181.75
Improved model  99.70                      99.60                        18.49

Tab. 3 Comparison of recognition performance between the proposed algorithm and the original model

Model           Dataset       Accuracy/%                                            Parameters/10^6
                              Original dataset  Dataset 1  Dataset 2  Dataset 3
ResNet-18       Training set  98.55             99.71      99.20      99.75         181.75
                Test set      95.14             99.26      98.97      99.21
Improved model  Training set  99.70             99.89      99.70      100.00        18.49
                Test set      99.60             99.75      99.40      99.93

Fig. 10 Comparison of recognition accuracy of different models

Fig. 11 Comparison of bobbin recognition results of the four models. Note: Class denotes the bobbin category; pos denotes a non-empty bobbin; neg denotes an empty bobbin; prob denotes the accuracy.

Fig. 12 Visual comparison of feature extraction by different models
