Journal of Textile Research ›› 2023, Vol. 44 ›› Issue (08): 181-188.doi: 10.13475/j.fzxb.20220608001

• Apparel Engineering •

Stitch quality detection method based on improved YOLOv4-Tiny

MA Chuangjia, QI Lizhe, GAO Xiaofei, WANG Ziheng, SUN Yunquan

  1. Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
  Received: 2022-06-30; Revised: 2022-12-02; Online: 2023-08-15; Published: 2023-09-21

Abstract:

Objective Manual quality inspection of sewing stitches is inefficient, and existing algorithms struggle to detect sewing stitches reliably because detection is easily disturbed by factors such as fabric wrinkles and illumination changes. An improved YOLOv4-Tiny object detection algorithm is therefore proposed to recognize and locate sewing stitch points. The number and position information of the stitch points output by the model are then used for sewing quality inspection.

Method A convolutional block attention module improved with SoftPool (Soft-CBAM) was introduced into YOLOv4-Tiny to strengthen the network's attention to the features of sewing stitch points, and a Soft-SPPF module built from SoftPool was placed in front of the YOLO Head so that multi-scale features are exploited during prediction. The improved algorithm was then used to obtain the number and coordinates of all sewing stitch points in a stitch image, from which the density and uniformity of the stitches were calculated.
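The Soft-CBAM and Soft-SPPF modules are described only at the architecture level; as a minimal sketch, the SoftPool operation that both are built on can be written in NumPy as follows (the function name `softpool2d` and the non-overlapping k×k pooling windows are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softpool2d(x, k=2):
    """SoftPool over non-overlapping k x k windows of a 2-D feature map.

    Each activation in a window is weighted by its softmax weight, so
    strong activations dominate (as in max pooling) while weaker ones
    still contribute (as in average pooling), retaining more feature
    detail during downsampling.
    """
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            wts = np.exp(win - win.max())      # numerically stable softmax weights
            wts /= wts.sum()
            out[i, j] = (wts * win).sum()      # softmax-weighted average
    return out
```

For any window the result lies between the plain average and the maximum of its activations, which is the motivation for replacing the max/average pooling inside CBAM and SPPF with SoftPool: less stitch-point detail is discarded during downsampling.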

Results The improved YOLOv4-Tiny algorithm and the comparison object detection models were trained with data augmentation on a self-built sewing dataset (Fig. 2), converged after 150 epochs of training (Fig. 6), and were evaluated on a test set. Compared with MobileNet-SSD, YOLOv5s and the original YOLOv4-Tiny, the improved method detects sewing stitch points notably better, achieving a mean average precision of 85.50% with a detection time of 15.9 ms (Tab. 3), which makes it more suitable than the original algorithm for sewing stitch detection. In addition, the improved YOLOv4-Tiny algorithm accurately obtained the number and coordinates of stitch points in images of the five stitch types (Fig. 7 and Tab. 2); density was computed as the number of stitch points per 10 cm, and uniformity as the average relative error between each adjacent-stitch-point distance and the average pixel distance. The densities obtained automatically differed from manual inspection by no more than 0.6 stitches per 10 cm, and the maximum difference in uniformity was 1.21%, demonstrating the feasibility of the method. Finally, ablation experiments were carried out (Tab. 5): each introduced module significantly improved the detection performance of YOLOv4-Tiny without a significant drop in detection speed, and the SoftPool-based Soft-CBAM and Soft-SPPF improved performance more than the original CBAM and SPPF while their detection times stayed within 0.5 ms of the original modules.
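The density and uniformity measures described above can be sketched as follows; the function name `stitch_quality`, the `px_per_mm` calibration scale, and the exact form of the relative-error average (mean of |d_i − d̄| / d̄ over the adjacent-point distances d_i) are assumptions based on the abstract's description, not the authors' published code:

```python
import math

def stitch_quality(points, px_per_mm):
    """Density and uniformity from detected stitch-point centres.

    points    : (x, y) pixel coordinates of stitch points, ordered along the seam
    px_per_mm : camera calibration scale, in pixels per millimetre

    Returns (density, r_mre):
      density : stitch points per 10 cm of seam length
      r_mre   : mean relative error (%) between each adjacent-point
                distance and the average pixel distance
    """
    # Euclidean distances between consecutive stitch points, in pixels
    dists = [math.dist(p, q) for p, q in zip(points, points[1:])]
    seam_len_mm = sum(dists) / px_per_mm
    density = len(points) / (seam_len_mm / 100.0)          # 10 cm = 100 mm
    mean_d = sum(dists) / len(dists)
    r_mre = 100.0 * sum(abs(d - mean_d) for d in dists) / (len(dists) * mean_d)
    return density, r_mre
```

Perfectly even stitches give r_mre = 0; larger values indicate less uniform spacing.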

Conclusion The improved algorithm can recognize stitch points despite interference from factors such as fabric folds and can accurately obtain their locations, making it suitable for sewing stitch detection scenarios. The proposed density and uniformity calculation method was verified against manual inspection results; it meets practical inspection accuracy requirements and is significantly faster than manual inspection, which can effectively improve production efficiency in the garment industry.

Key words: clothing production, sewing stitch, quality detection, YOLOv4-Tiny, convolutional attention mechanism, fast spatial pyramid pooling, clothing quality

CLC Number: TP181

Fig. 1

Flow chart of sewing stitch quality inspection

Tab. 1

Sewing stitch dataset distribution statistics

Fabric/thread colour   Number of images per stitch type
                       Straight   Serrated   Dash serrated   Ornament   Splicing
Blue                   11         14         9               14         13
Brown                  13         12         16              13         15
Gray                   11         8          12              11         7
Pink                   4          5          5               4          3

Fig. 2

Sample images of sewing stitch types. (a) Straight; (b) Serrated; (c) Dash serrated; (d) Ornament; (e) Splicing

Fig. 3

Improved YOLOv4-Tiny network structure

Fig. 4

Soft-CBAM module structure and Soft-SPPF network architecture

Fig. 5

Feature images before (a) and after (b) adding Soft-CBAM module

Fig. 6

Model training loss curve"

Fig. 7

Detection effect of improved YOLOv4-Tiny algorithm. (a) Original image; (b) Detection results

Tab. 2

Comparison between automatic quality assessment and manual testing methods

No.   Stitch count          Stitch length/mm        Density/(stitches·(10 cm)⁻¹)   Uniformity R_MRE/%
      Auto  Manual  Diff    Auto   Manual  Diff     Auto    Manual   Diff          Auto   Manual  Diff
a     16    16      0       47.2   48.0    -0.8     33.93   33.33    0.60          5.04   6.25    -1.21
b     9     9       0       42.4   42.5    -0.1     21.24   21.18    0.06          4.98   4.47     0.51
c     3     3       0       31.9   32.5    -0.6     9.41    9.23     0.18          1.77   1.54     0.23
d     4     4       0       47.4   48.5    -1.1     8.44    8.25     0.19          2.23   1.24     0.99
e     3     3       0       32.3   32.5    -0.2     9.27    9.23     0.04          2.92   2.15     0.77

Tab. 3

Performance comparison of different object detection algorithms

Model                  mAP/%   F1 score   Detection time/ms
MobileNet-SSD          70.31   0.772      20.3
YOLOv5s                66.96   0.744      19.1
YOLOv4-Tiny            72.68   0.796      13.5
Improved YOLOv4-Tiny   85.50   0.892      15.9

Tab. 4

Difference between quality detection results and manual detection based on YOLOv4-Tiny algorithm

No.   Stitch length difference/mm   Density difference/(stitches·(10 cm)⁻¹)   Uniformity difference/%
a     -0.6                           0.41                                      1.58
b      0.1                          -0.07                                      1.63
c     -0.7                           0.21                                      0.62
d     -1.6                           0.28                                      2.07
e     -0.3                          -0.40                                      0.49

Tab. 5

Contribution of each module to detection algorithm

No.   CBAM   Soft-CBAM   SPPF   Soft-SPPF   mAP/%   F1 score   Detection time/ms
a     ×      ×           ×      ×           72.68   0.796      13.5
b     √      ×           ×      ×           79.87   0.852      14.6
c     ×      √           ×      ×           81.04   0.862      14.4
d     ×      ×           √      ×           77.16   0.831      14.3
e     ×      ×           ×      √           79.17   0.838      14.7