Journal of Textile Research ›› 2023, Vol. 44 ›› Issue (08): 181-188. doi: 10.13475/j.fzxb.20220608001

• Apparel Engineering •

Stitch quality detection method based on improved YOLOv4-Tiny

MA Chuangjia, QI Lizhe, GAO Xiaofei, WANG Ziheng, SUN Yunquan

  1. Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
  • Received: 2022-06-30  Revised: 2022-12-02  Published: 2023-08-15  Online: 2023-09-21
  • Corresponding author: QI Lizhe (1981—), male, research fellow, Ph.D. Research interest: robotic visual perception. E-mail: qilizhe@fudan.edu.cn
  • About the first author: MA Chuangjia (1996—), male, master's student. Research interest: image-based detection of sewing quality.
  • Funding: Science and Technology Major Project of Shanghai Municipality (2021SHZDZX0103); Major Key Common Technology and Application Demonstration Research Project of Ji Hua Laboratory, Guangdong (Y80311W180)

Abstract:

Manual inspection of sewing stitch quality is inefficient, and current algorithms applied to stitch quality detection struggle to detect stitches whose color is close to that of the fabric and are easily disturbed by fabric wrinkles, illumination changes and other factors. To address these problems, an improved YOLOv4-Tiny object detection model is proposed to recognize and locate stitch points and thereby perform quality detection. First, a convolutional block attention module improved with SoftPool is introduced into YOLOv4-Tiny to strengthen the network's attention to stitch features. Then, a Soft-SPPF module composed of SoftPool layers is placed in front of the YOLO detection heads so that the model exploits multi-scale features during detection. Finally, the improved algorithm outputs the number and coordinates of the stitch points, from which stitch density and uniformity are calculated. Experiments on a self-built dataset show that the proposed algorithm reaches a mean average precision of 85.50% with a detection time of 15.9 ms, making it better suited to stitch detection than the original algorithm and other common object detection models. The stitch density it computes differs from manual inspection by no more than 0.6 stitches/(10 cm), and the computed uniformity is close to the manual result, meeting the accuracy requirements of practical inspection.

Key words: sewing stitch, quality detection, YOLOv4-Tiny, convolutional attention mechanism, fast spatial pyramid pooling, clothing quality

Abstract:

Objective Manual quality inspection of sewing stitches is inefficient, and existing algorithms struggle to detect stitches whose color is close to that of the fabric; detection is also easily disturbed by factors such as fabric wrinkles and illumination changes. An improved YOLOv4-Tiny object detection algorithm is proposed to recognize and locate sewing stitch points, and the number and position information of the stitch points produced by the model are used for sewing quality detection.

Method The convolutional attention mechanism improved with SoftPool was introduced into YOLOv4-Tiny to enhance the detection network's attention to the features of sewing stitch points, and a Soft-SPPF module composed of SoftPool layers was placed in front of the YOLO heads so that the model could exploit multi-scale features in prediction. The improved algorithm was used to obtain the number and coordinate information of all stitch points in a sewing stitch image, from which the density and uniformity of the stitches were calculated.
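Both Soft-CBAM and Soft-SPPF build on SoftPool, which replaces the hard selection of max pooling with a softmax-weighted average over each pooling region. A minimal NumPy sketch of the operation is given below; the function name and the non-overlapping kernel handling are illustrative assumptions, not the authors' implementation (in Soft-SPPF the same weighting would be applied with stride 1 and padding, as in the serial pooling stages of SPPF):

```python
import numpy as np

def softpool2d(x, k=2):
    """SoftPool over non-overlapping k x k regions of a 2-D feature map.

    Each activation is weighted by its softmax weight within the region,
    so large activations dominate (as in max pooling) while weaker ones
    still contribute (as in average pooling).
    """
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            region = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            wgt = np.exp(region - region.max())  # numerically stable softmax weights
            out[i, j] = (wgt * region).sum() / wgt.sum()
    return out
```

Because the weights are exponential in the activations, the pooled value always lies between the region's mean and its maximum, which is why SoftPool preserves more of the weak stitch-edge responses that plain max pooling would discard.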

Results The improved YOLOv4-Tiny algorithm and the comparison object detection models were trained with data augmentation on a self-built sewing dataset (Fig. 2), converged after 150 epochs of training (Fig. 7), and were evaluated on a test set. Compared with MobileNet-SSD, YOLOv5s and the original YOLOv4-Tiny, the improved method detected sewing stitch points notably better, achieving a mean average precision of 85.50% with a detection time of 15.9 ms (Tab. 3), which makes it more suitable than the original algorithm for stitch detection. In addition (Fig. 8 and Tab. 2), the improved YOLOv4-Tiny algorithm accurately obtained the number and coordinates of stitch points in images of the five stitch types. Density was computed as the number of stitch points per 10 cm, and uniformity as the average relative error between each adjacent-point distance and the average pixel distance. The densities obtained automatically differed from manual inspection by no more than 0.6 stitches/(10 cm), and the maximum difference in uniformity was 1.21%, demonstrating the feasibility of the method. Finally, ablation experiments were carried out (Tab. 5). Introducing the additional modules significantly improved the detection performance of YOLOv4-Tiny without a significant decrease in detection speed. Soft-CBAM and Soft-SPPF, which incorporate SoftPool, improved detection performance more than the original CBAM and SPPF, while their detection times differed from those of the original modules by less than 0.5 ms.
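The density and uniformity computations described above can be sketched as follows. This is a hedged illustration: the function name is hypothetical, and the paper's exact conventions (pixel-to-mm conversion via camera calibration, the precise R_MRE formula) may differ in detail:

```python
import numpy as np

def stitch_density_uniformity(points_mm):
    """Compute stitch density and uniformity from detected stitch points.

    points_mm: stitch-point coordinates in millimetres (already converted
    from pixel coordinates), ordered along the seam.
    Density is the number of stitch points per 10 cm of seam length;
    uniformity (R_MRE) is the mean relative error of each adjacent-point
    spacing against the average spacing, so 0% means perfectly even stitches.
    """
    pts = np.asarray(points_mm, dtype=float)
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # adjacent spacings
    seam_length = gaps.sum()                             # seam length in mm
    density = len(pts) / seam_length * 100.0             # 10 cm = 100 mm
    mean_gap = gaps.mean()
    r_mre = np.abs(gaps - mean_gap).mean() / mean_gap * 100.0  # in percent
    return density, r_mre
```

For example, four points spaced exactly 10 mm apart span a 30 mm seam, giving a density of about 13.3 stitches/(10 cm) and an R_MRE of 0%.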

Conclusion The improved algorithm recognizes stitch points despite interference from factors such as fabric wrinkles and accurately obtains their location information, making it suitable for sewing stitch detection scenarios. The proposed density and uniformity calculation method was verified against manual inspection results: it meets practical inspection accuracy requirements and is significantly faster than manual measurement, and it can therefore effectively improve production efficiency in the garment industry.

Key words: clothing production, sewing stitch, quality detection, YOLOv4-Tiny, convolutional attention mechanism, fast spatial pyramid pooling, clothing quality

CLC number: TP181

Fig. 1 Flow chart of sewing stitch quality detection

Tab. 1 Distribution of the sewing stitch dataset

Fabric & thread   Number of images per stitch type
color             Straight  Zigzag  Dashed zigzag  Decorative seam  Splicing seam
Blue              11        14      9              14               13
Brown             13        12      16             13               15
Grey              11        8       12             11               7
Pink              4         5       5              4                3

Fig. 2 Sample images of sewing stitches

Fig. 3 Improved YOLOv4-Tiny network structure

Fig. 4 Structure of the Soft-CBAM module and the Soft-SPPF network

Fig. 5 Feature maps before and after adding the Soft-CBAM module

Fig. 6 Model training loss curves

Fig. 7 Detection results of the improved YOLOv4-Tiny

Tab. 2 Comparison between automatic and manual quality detection

     Stitch count        Stitch length/mm      Density/(stitches·(10 cm)^-1)   Uniformity R_MRE/%
No.  Auto Manual Diff    Auto  Manual  Diff    Auto   Manual  Diff             Auto  Manual  Diff
a    16   16     0       47.2  48.0    -0.8    33.93  33.33   0.60             5.04  6.25    -1.21
b    9    9      0       42.4  42.5    -0.1    21.24  21.18   0.06             4.98  4.47    0.51
c    3    3      0       31.9  32.5    -0.6    9.41   9.23    0.18             1.77  1.54    0.23
d    4    4      0       47.4  48.5    -1.1    8.44   8.25    0.19             2.23  1.24    0.99
e    3    3      0       32.3  32.5    -0.2    9.27   9.23    0.04             2.92  2.15    0.77

Tab. 3 Performance comparison of object detection algorithms

Model                 mAP/%   F1     Detection time/ms
MobileNet-SSD         70.31   0.772  20.3
YOLOv5s               66.96   0.744  19.1
YOLOv4-Tiny           72.68   0.796  13.5
Improved YOLOv4-Tiny  85.50   0.892  15.9

Tab. 4 Differences between YOLOv4-Tiny quality detection results and manual inspection

No.  Stitch length diff/mm  Density diff/(stitches·(10 cm)^-1)  Uniformity diff/%
a    -0.6                   0.41                                 1.58
b    0.1                    -0.07                                1.63
c    -0.7                   0.21                                 0.62
d    -1.6                   0.28                                 2.07
e    -0.3                   -0.40                                0.49

Tab. 5 Contribution of each module to the detection algorithm

No.  CBAM  Soft-CBAM  SPPF  Soft-SPPF  mAP/%   F1     Detection time/ms
a    ×     ×          ×     ×          72.68   0.796  13.5
b    √     ×          ×     ×          79.87   0.852  14.6
c    ×     √          ×     ×          81.04   0.862  14.4
d    ×     ×          √     ×          77.16   0.831  14.3
e    ×     ×          ×     √          79.17   0.838  14.7