Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (10): 227-236. doi: 10.13475/j.fzxb.20241203901

• Machinery and Equipment •


Design of fabric defect detection system based on high generalization image generation and classification algorithm

WU Weitao1,2, HAN Aobo1, NIU Kui3, JIA Jianhui3, YIN Bangxiong1, XIANG Zhong1

  1. Faculty of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. Zhejiang Sci-Tech University Xinchang Technology Innovation Research Institute, Shaoxing, Zhejiang 312000, China
    3. School of Information Science and Engineering, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
  • Received: 2024-12-18  Revised: 2025-03-06  Published: 2025-10-15  Online: 2025-10-15
  • Corresponding author: XIANG Zhong (1982—), male, professor, Ph.D. His research focuses on the intelligentization, informatization, greening and standardization of electromechanical equipment. E-mail: xz@zstu.edu.cn
  • About the first author: WU Weitao (1994—), male, lecturer, Ph.D. His research focuses on machine vision, artificial intelligence, and the mechatronic design of intelligent equipment.
  • Funding: National Natural Science Foundation of China (51605443); Central Government Fund for Guiding Local Science and Technology Development (2023ZY1029); Zhejiang Provincial "Pioneer" and "Leading Goose" R&D Program (2023YFB3210901); Hangzhou Major Science and Technology Innovation Project (2022AIZD0153); Xinchang County Science and Technology Program (TM2023011)

Abstract:

To improve the accuracy of defect detection during fabric inspection, artificial intelligence is used to generate defect images that balance samples across defect classes, and multi-level feature aggregation is used to extract the common morphological features of defects of the same type. With an edge computer as the hardware platform, industrial cameras acquire defect images in real time and the back-end data are comprehensively processed through mathematical modeling; on this basis, a whole-process fabric defect detection scheme covering hardware, sampling, training, detection and cloud modules is designed. A defect mask generator is designed to enable AI-based batch generation of highly realistic defect images, solving the imbalance among heterogeneous defects; network modules such as a hook-shaped feature pyramid are proposed to extract differentiated defect features precisely, solving the poor generalization of defect classification. The system is optimized and accelerated with the TensorRT framework, uses a two-stage decision algorithm to achieve high-speed detection, and includes a cloud-based defect data analysis module. In experiments and industrial application tests, the system achieved a detection rate of 93.88%; under the same quality requirements, it saves at least 50% of the labor while improving factory efficiency and reducing economic cost.


Abstract:

Objective Rapid, real-time and accurate detection of fabric defects is an important step in the textile production process. Manual inspection suffers from low efficiency, high labor intensity and a high missed-detection rate, while existing mainstream object detection algorithms offer low detection rates and poor real-time performance. A fabric defect detection scheme covering hardware, sampling, training, detection and cloud modules is therefore proposed, with the aim of developing a machine-vision-based defect detection system that meets the accuracy and real-time requirements of practical applications.

Method A mask-based fabric defect image generation algorithm is proposed. Built on the CycleGAN network model, it generates images of heterogeneous defects to address the class imbalance among defect samples. Three network modules, including a hook-shaped feature pyramid, are proposed; starting from the feature extraction network and the information fusion module, they enable accurate extraction of differentiated defect features on the basis of the YOLOv5 network model, solving the poor generalization of defect classification exhibited by previous object detection algorithms. The designed system is optimized and accelerated with the TensorRT framework, uses a two-stage decision algorithm to realize high-speed detection, and includes a defect data cloud analysis module.
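The mask-conditioned generator is only summarized above; the PyTorch sketch below shows one minimal way to condition a CycleGAN-style generator on a binary defect mask so that synthetic defects are confined to the masked region. All names and hyperparameters here (MaskedGenerator, base, n_blocks, patch size) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: conditioning a CycleGAN-style generator on a binary defect mask,
# assuming the mask is simply concatenated as an extra input channel.
import torch
import torch.nn as nn

class MaskedGenerator(nn.Module):
    """Translates a defect-free fabric patch plus a 1-channel mask into a defective patch."""
    def __init__(self, in_ch: int = 3, base: int = 64, n_blocks: int = 6):
        super().__init__()
        layers = [nn.Conv2d(in_ch + 1, base, 7, padding=3), nn.InstanceNorm2d(base), nn.ReLU(True)]
        for _ in range(n_blocks):
            layers += [nn.Conv2d(base, base, 3, padding=1), nn.InstanceNorm2d(base), nn.ReLU(True)]
        layers += [nn.Conv2d(base, in_ch, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, clean: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        fake = self.net(torch.cat([clean, mask], dim=1))  # mask marks where a defect should appear
        return mask * fake + (1.0 - mask) * clean          # defect-free background is left untouched

# Usage: one normalized 256x256 defect-free patch and a sparse random stand-in mask
g = MaskedGenerator()
clean = torch.rand(1, 3, 256, 256) * 2 - 1                 # fabric patch scaled to [-1, 1]
mask = (torch.rand(1, 1, 256, 256) > 0.98).float()         # stand-in for a sampled defect mask
synthetic = g(clean, mask)                                  # synthetic defective patch, same size
```

Blending through the mask is what keeps the generated defect localized, which is the property needed to rebalance rare defect classes without distorting the fabric background.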

Results The images in the training dataset are 2 880×1 620 pixels. In the training phase, the network parameters were trained on a composite dataset composed of original defect images and cropped defect-region images. The dataset contains 48 defect categories and 40 663 images in total; LabelImg was used to mark the location of each defect, add class labels, and export annotation files in PASCAL VOC format. The dataset was divided into training, validation and test sets at a ratio of 8∶1∶1. The improved YOLOv5 model was used for training, with the input image size adjusted to 640×640 and a batch size of 32, for a total of 500 iterations. Training and testing were performed on a server equipped with an Intel Core i9-12900HX CPU, eight NVIDIA GeForce RTX 4090 GPUs (24 GB each), and 128 GB of RAM. The experimental results show that both the sample enhancement and the improved detection network raise detection accuracy. Compared with mainstream object detection networks, the detection accuracy is up to 8 percentage points higher, reaching 95.1%. In the factory test, the detection rate is 93.88%, which meets the factory's quality requirements.
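As a concrete illustration of the 8∶1∶1 split described above, the snippet below performs a reproducible random partition of an image folder; the directory path and file extension are assumptions for illustration only.

```python
# Hedged sketch of the 8:1:1 train/validation/test split described in the text.
import random
from pathlib import Path

def split_dataset(image_dir: str, seed: int = 0):
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)            # fixed seed keeps the split reproducible
    n = len(images)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (images[:n_train],                      # 80% training
            images[n_train:n_train + n_val],       # 10% validation
            images[n_train + n_val:])              # 10% test

train, val, test = split_dataset("dataset/images")
print(len(train), len(val), len(test))             # roughly 8:1:1 of the 40 663 images
```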

Conclusion Artificial intelligence is used to generate defect images, solving the uneven distribution of samples across heterogeneous defect classes. The common morphological features of defects of the same type are extracted through multi-level feature aggregation, solving the low detection rate and poor generalization caused by the varied morphology of same-name defects. A defect mask generator is designed to enable AI-based batch generation of highly realistic defect images, addressing the imbalance among heterogeneous defects; network modules such as the hook-shaped feature pyramid are proposed to achieve precise extraction of differentiated defect features, addressing the poor generalization of defect classification. On this basis, a whole-process fabric defect detection scheme covering hardware, sampling, training, detection and cloud modules is designed. An edge computer serves as the hardware platform, industrial cameras capture defect images in real time, and the back-end data are comprehensively processed through mathematical modeling. This not only solves the low efficiency, high labor intensity and high missed-detection rate of manual fabric inspection, but also realizes classified storage, management, traceability and analysis of the data. Under the same quality requirements, the developed system saves at least 50% of the labor while improving factory efficiency and reducing economic cost.
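To make the deployed pipeline concrete, the following sketch shows one plausible shape of the edge-side loop (capture, accelerated inference, two-stage decision, cloud upload). The inference stub, the thresholds and the endpoint URL are hypothetical placeholders; the paper does not specify these interfaces, and the real system's two-stage decision may differ.

```python
# Hedged sketch of an edge-side detection loop: grab a frame, run the accelerated
# detector, apply a coarse-then-strict two-stage decision, and push confirmed
# defects to the cloud module. run_trt_inference(), the thresholds and CLOUD_URL
# are placeholders, not the system's actual interfaces.
import cv2
import requests

CLOUD_URL = "http://example.local/api/defects"     # hypothetical cloud endpoint

def run_trt_inference(frame):
    """Placeholder for the TensorRT-accelerated detector; returns [(label, score, box), ...]."""
    return []

def detect_loop(camera_index: int = 0, coarse_thr: float = 0.25, strict_thr: float = 0.5):
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        detections = run_trt_inference(frame)
        # Stage 1: cheap screening - keep frames with any candidate above a low threshold.
        candidates = [d for d in detections if d[1] >= coarse_thr]
        if not candidates:
            continue
        # Stage 2: stricter confirmation before a defect is reported and stored.
        confirmed = [(label, float(score)) for label, score, _ in candidates if score >= strict_thr]
        if confirmed:
            requests.post(CLOUD_URL, json={"defects": confirmed}, timeout=2)
    cap.release()

if __name__ == "__main__":
    detect_loop()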

Key words: textile quality, fabric defect detection, machine vision, digital upgrade, intelligent manufacturing

CLC number: TP391.4

Fig. 1  System modeling and overall equipment

Table 1  Camera parameters

Parameter  Value
Resolution  2 880 pixels × 1 620 pixels
Maximum frame rate  37.4 frame/s
Exposure time  20 μs
Focal length  (16±0.8) mm
Field of view  36.8°×29.88°×22.6°
Sensor size  1.69 cm
Aperture range  F1.8~F14

Fig. 2  Hardware architecture

Fig. 3  Structure of the CycleGAN network with added mask

Fig. 4  Defect appearance and structure of the software algorithm modules

Fig. 5  Architecture of the "intelligence-cloud" collaborative optimization model

Fig. 6  Schematic of the human-machine interaction interface

Fig. 7  Flow chart of the detection algorithm

Table 2  FID scores of fabric defect images generated by each network model

Defect type  CycleGAN[9]  StyleGAN2[10]  Proposed method
Broken warp  99.25  96.69  79.35
Broken weft  134.86  102.87  95.24
Cotton ball  45.87  28.97  26.49
Double weft  122.98  112.39  92.38
Hole  42.37  24.87  23.57
Weft shrinkage  69.48  67.54  60.39
Wrong draw  84.58  94.38  82.39
Reed mark  112.56  96.76  83.47
Thick place  86.95  65.84  65.25
Thin place  74.28  62.37  46.37
Average  87.31  75.26  65.49
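For reference, the FID values in Table 2 are assumed to follow the standard Fréchet inception distance (lower is better), computed from the mean and covariance of Inception features of the real and generated defect image sets:

$$\mathrm{FID}=\lVert \mu_r-\mu_g\rVert_2^{2}+\operatorname{Tr}\!\left(\Sigma_r+\Sigma_g-2\left(\Sigma_r\Sigma_g\right)^{1/2}\right)$$

where $(\mu_r,\Sigma_r)$ and $(\mu_g,\Sigma_g)$ are the feature statistics of the real and generated images, respectively.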

Fig. 8  Comparison of captured and generated images

Table 3  Detection accuracy of each network model

Method  mAP/% (without generated images)  mAP/% (with generated images)
SSD[12]  61.4  64.7
FasterRCNN[14]  61.5  63.9
YoloV5n[5]  85.3  87.8
YoloV7tiny[11]  84.5  86.7
YoloV8n[13]  91.8  93.4
Proposed algorithm  93.3  95.1
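The mAP reported in Table 3 (and Table 4 below) is assumed here to be the standard mean average precision over defect classes:

$$\mathrm{AP}_c=\int_0^1 p_c(r)\,\mathrm{d}r,\qquad \mathrm{mAP}=\frac{1}{C}\sum_{c=1}^{C}\mathrm{AP}_c$$

where $p_c(r)$ is the precision at recall $r$ for defect class $c$ and $C$ is the number of classes (48 in this dataset).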

Table 4  Performance comparison of each module

Method  mAP/%  Frame rate/(frame·s⁻¹)
YoloV5 (Baseline)  85.3  30.2
YoloV5+PDAM  89.8  27.7
YoloV5+HookFPN  86.5  35.1
YoloV5+MAFV  88.7  28.6
YoloV5+PDAM+HookFPN  91.6  30.3
YoloV5+PDAM+HookFPN+MAFV  93.3  29.5

Table 5  Comparison of detection data between the system and human inspectors

Fabric No.  Detection method  Accuracy/%  Detection rate/%  Synchronization rate/%
150624A  Worker  100
150624A  System  100  207.69  207.69
190921A  Worker  100
190921A  System  92.86  216.67  205.71
170627A  Worker  100
170627A  System  95.4  244.12  218.42

Table 6  Comparison of detection data between workers and the system trained without GAN-generated images

Fabric No.  Detection method  Accuracy/%  Detection rate/%  Synchronization rate/%
z180923A  Worker  100
z180923A  System  70  87.5  62.5
z241220A  Worker  100
z241220A  System  60.61  83.33  45.83
z240834A  Worker  100
z240834A  System  70  116.67  50

Table 7  Limit performance of the system under real working conditions

Fabric name  Test object  Accuracy/%  Detection rate/%
231216A-2  Detection system  93.54  93.34
240513A  Detection system  90.90  93.86
220522A  Detection system  95.83  93.88
[1] ZHENG Xiaohu, LIU Zhenghao, CHEN Feng, et al. Current status and prospect of intelligent development in textile industry[J]. Journal of Textile Research, 2023, 44(8): 205-216.
[2] WU Jing, JIANG Zhenlin, JI Peng, et al. Research status and development trend of perspective preparation technologies and applications for textiles[J]. Journal of Textile Research, 2023, 44(1): 1-10.
[3] LUO X, NI Q, TAO R, et al. A lightweight detector based on attention mechanism for fabric defect detection[J]. IEEE Access, 2023, 11: 33554-33569. doi: 10.1109/ACCESS.2023.3264262
[4] LIN G J, LIU K Y, XIA X K, et al. An efficient and intelligent detection method for fabric defects based on improved YOLOv5[J]. Sensors, 2023, 23(1): 97. doi: 10.3390/s23010097
[5] JOCHER G, CHAURASIA A, STOKEN A, et al. Ultralytics/yolov5: v6.1-TensorRT, TensorFlow edge TPU and OpenVINO export and inference[CP/OL]. [2022-02-22]. https://github.com/ultralytics/yolov5.
[6] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]// 2017 IEEE International Conference on Computer Vision (ICCV). New York: IEEE, 2017: 2242-2251.
[7] XIANG Z, SHEN Y J, MA M, et al. HookNet: efficient multiscale context aggregation for high-accuracy detection of fabric defects[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 5016311.
[8] ZHOU K L, JIA J H, WU W T, et al. Space-depth mutual compensation for fine-grained fabric defect detection model[J]. Applied Soft Computing, 2025, 172: 112869. doi: 10.1016/j.asoc.2025.112869
[9] KARRAS T, LAINE S, AILA T M. A style-based generator architecture for generative adversarial networks[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 4396-4405.
[10] KARRAS T, LAINE S, AITTALA M, et al. Analyzing and improving the image quality of StyleGAN[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2020: 8107-8116.
[11] WANG C Y, BOCHKOVSKIY A, LIAO H M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2023: 7464-7475.
[12] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[C]// Computer Vision-ECCV 2016. Cham: Springer International Publishing, 2016: 21-37.
[13] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. doi: 10.1109/TPAMI.2016.2577031