Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (08): 89-95. DOI: 10.13475/j.fzxb.20240902301

• Textile Engineering •

Plaid fabric image retrieval method based on deep feature fusion

ZHANG Xiaoting1,2, ZHAO Pengyu1, PAN Ruru2, GAO Weidong2

  1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China
    2. College of Textile Science and Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
  • Received: 2024-09-14  Revised: 2025-04-25  Online: 2025-08-15  Published: 2025-08-15
  • Contact: GAO Weidong, E-mail: gaowd3@163.com

Abstract:

Objective Fabric image retrieval allows a factory to search for similar fabric images that already exist in its archive so that the corresponding process parameters can be reused to guide production, saving the manpower and material resources otherwise spent on repeated trial weaving. Existing fabric image retrieval methods do not exploit the characteristics of plaid fabric images, and their retrieval performance needs further improvement.

Method To improve the accuracy of plaid fabric image retrieval, a convolutional neural network (CNN) model combining an attention mechanism and hash coding was proposed based on the characteristics of plaid fabrics. The attention mechanism was introduced into the CNN branches to focus on the key information in fabric images, and the global and local deep features extracted in this way were fused by a feature fusion module. A hash encoding layer was then built to compress the fused features, balancing retrieval precision against retrieval efficiency.
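As a concrete illustration of this pipeline, the sketch below assembles an attention-augmented CNN backbone, a global/local feature fusion step, and a hash encoding layer in PyTorch. The backbone choice (ResNet-50), the SE-style channel attention, the pooling-based local branch, the 64-bit code length, and all layer names are assumptions for illustration, not the paper's exact architecture.

# Minimal PyTorch sketch of the described pipeline: attention-augmented CNN
# -> global/local feature fusion -> hash encoding layer. Architectural details
# (ResNet-50 backbone, SE-style attention, 64-bit codes) are assumed, not verbatim.
import torch
import torch.nn as nn
from torchvision import models


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention, used here as a stand-in
    for the paper's attention mechanism."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)


class PlaidHashNet(nn.Module):
    def __init__(self, code_bits=64):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.stem = nn.Sequential(*list(backbone.children())[:-2])  # conv feature maps
        self.attention = ChannelAttention(2048)
        self.global_pool = nn.AdaptiveAvgPool2d(1)   # global branch
        self.local_pool = nn.AdaptiveMaxPool2d(1)    # crude local-detail branch
        self.fuse = nn.Linear(2048 * 2, 1024)        # feature fusion module
        self.hash_layer = nn.Sequential(nn.Linear(1024, code_bits), nn.Tanh())

    def forward(self, x):
        feat = self.attention(self.stem(x))
        g = self.global_pool(feat).flatten(1)
        l = self.local_pool(feat).flatten(1)
        fused = torch.relu(self.fuse(torch.cat([g, l], dim=1)))
        return self.hash_layer(fused)                # relaxed codes in (-1, 1)


# Binary codes for retrieval are obtained by thresholding at zero, e.g.:
# codes = PlaidHashNet()(images) > 0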

Results The proposed method was tested on a newly established plaid fabric image retrieval dataset. The average precision (P5) and recall (R5) over the four categories reached 77.3% and 55.2%, respectively, indicating that relevant images were retrieved effectively, and the average mean average precision (mAP) of 0.759 demonstrated the robustness of the system across all queries. In addition, P5, R5, and mAP were positively correlated. The retrieval precisions of the color block and academic categories reached 83.1% and 86.5%, respectively, while those of the window and Welsh categories were relatively low at 71.8% and 67.4%. The reason was that some window and Welsh plaids had a pattern repeat larger than the image acquisition size used in this study, so images of the same fabric sample collected in different areas differed from each other. An average P5 of 77.3% means that, on average, 3.865 of the top-5 ranked images are correct. Since what enterprises require from fabric image retrieval is to find similar images quickly, the proposed method was shown to meet the requirements of fabric manufacturing enterprises. Different low-level feature extraction methods and deep feature extraction methods were compared with the proposed method. The various types of plaid fabrics exhibit significant intra-class and inter-class differences; the low-level features rely heavily on manual design, and their parameters are not universally applicable across plaid types. Although the deep features achieved good performance, their retrieval performance was still lower than that of the proposed method because they are not tailored to the characteristics of plaid fabric images. The comparative experiments proved the adaptability and superiority of the proposed method for plaid fabric image retrieval.
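For reference, the sketch below gives the standard definitions of the reported metrics (precision and recall at rank 5, and average precision); whether the paper averages them in exactly this way over queries and categories is an assumption.

# Standard definitions of the reported metrics (P5, R5, AP), sketched for one query.
import numpy as np


def precision_recall_at_k(ranked_relevance, num_relevant, k=5):
    """ranked_relevance: 1/0 relevance flags of the retrieved list, best match first."""
    top_k = np.asarray(ranked_relevance[:k])
    precision = top_k.sum() / k
    recall = top_k.sum() / num_relevant
    return precision, recall


def average_precision(ranked_relevance):
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    precisions = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precisions * rel).sum() / rel.sum())


# Example: a query whose top-5 list contains 4 relevant images out of 6 relevant in total
p5, r5 = precision_recall_at_k([1, 1, 0, 1, 1], num_relevant=6)
print(p5, r5)  # 0.8 0.666...
# An average P5 of 77.3% corresponds to 0.773 * 5 = 3.865 correct images in the top 5.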

Conclusion A plaid fabric image retrieval method based on deep feature fusion was proposed. Attention mechanisms and a hash encoding layer were incorporated into the CNN model to realize feature extraction, feature fusion, and feature compression, and plaid fabric image retrieval was achieved by similarity measurement using the Hamming distance. The results showed that the average P5, R5, and mAP reached 77.3%, 55.2%, and 0.759, respectively, demonstrating the feasibility and effectiveness of the method. Comparison with existing fabric image retrieval methods verified the adaptability and superiority of the proposed method for plaid image retrieval. The proposed method can help enterprises quickly search for existing product process parameters to guide production and improve design, production, and operational efficiency.
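A minimal sketch of the Hamming-distance retrieval step named above is given below; representing codes as boolean arrays and the NumPy-based ranking are implementation assumptions, not the paper's code.

# Rank database images by Hamming distance between binary hash codes.
import numpy as np


def hamming_rank(query_code, db_codes, top_k=5):
    """query_code: (bits,) boolean array; db_codes: (N, bits) boolean array."""
    distances = np.count_nonzero(db_codes != query_code, axis=1)  # XOR + popcount
    order = np.argsort(distances, kind="stable")
    return order[:top_k], distances[order[:top_k]]


# Toy usage with random 64-bit codes
rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8).astype(bool)
query = db[42] ^ (rng.random(64) < 0.05)  # a slightly perturbed database code
indices, dists = hamming_rank(query, db)
print(indices, dists)  # index 42 should rank first with a small distance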

Key words: deep feature, convolutional neural network, attention mechanism, hash encoding, fabric image retrieval, plaid fabric image, production efficiency

CLC Number: TS101.8

Fig.1

Image samples of different types of plaid fabrics. (a) Color block; (b) Window; (c) Academic; (d) Welsh

Fig.2

Feature extraction model of plaid fabric images

Tab.1

Comparison of average retrieval performance of different network modules

Module   P5/%   R5/%   mAP
Res12    57.6   37.7   0.568
Res13    61.2   42.6   0.622
Res14    69.1   48.5   0.692
Res23    65.2   44.8   0.645
Res24    74.9   54.2   0.742
Res34    77.3   55.2   0.759

Fig.3

Illustration of orthogonal fusion module
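Fig. 3 and reference [9] (DOLG) describe the orthogonal fusion of local and global features; the sketch below follows the DOLG-style formulation, in which the component of the local feature maps parallel to the global descriptor is removed before concatenation. Tensor shapes and the pooling of the orthogonal part are illustrative assumptions, not the paper's exact module.

# DOLG-style orthogonal fusion: decompose local feature maps into components
# parallel and orthogonal to the global descriptor, keep the orthogonal part,
# and concatenate it with the global vector.
import torch


def orthogonal_fusion(local_feat, global_feat, eps=1e-6):
    """local_feat: (B, C, H, W) feature maps; global_feat: (B, C) descriptor."""
    B, C, H, W = local_feat.shape
    g = global_feat.view(B, C, 1, 1)
    dot = (local_feat * g).sum(dim=1, keepdim=True)               # (B, 1, H, W)
    g_norm_sq = (global_feat ** 2).sum(dim=1).view(B, 1, 1, 1) + eps
    proj = dot / g_norm_sq * g                                     # parallel component
    orth = local_feat - proj                                       # orthogonal component
    orth_pooled = orth.mean(dim=(2, 3))                            # pooled to (B, C)
    return torch.cat([orth_pooled, global_feat], dim=1)            # (B, 2C)


fused = orthogonal_fusion(torch.randn(2, 2048, 7, 7), torch.randn(2, 2048))
print(fused.shape)  # torch.Size([2, 4096])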

Tab.2

Performance comparison of different retrieval methods

Method    Color block          Window               Academic             Welsh                Average              Time/ms
          P5/%  R5/%  mAP      P5/%  R5/%  mAP      P5/%  R5/%  mAP      P5/%  R5/%  mAP      P5/%  R5/%  mAP
MRLBP     55.5  36.4  0.462    59.7  42.5  0.523    66.5  44.8  0.605    59.3  50.3  0.563    59.6  43.2  0.531    240.9
GCM       68.3  43.5  0.641    53.3  37.0  0.455    65.4  43.6  0.622    46.3  37.8  0.415    59.3  40.6  0.541    269.5
VTH       75.2  50.4  0.731    69.7  50.0  0.669    84.3  58.6  0.829    62.2  52.5  0.624    71.8  52.2  0.699    212.7
DOLG      82.1  55.5  0.785    70.6  50.5  0.669    83.8  57.7  0.802    67.8  57.8  0.675    75.3  54.9  0.725    210.3
ACD       76.6  51.4  0.709    70.9  51.1  0.692    76.8  53.8  0.758    66.7  56.3  0.685    72.4  53.0  0.706     30.6
Proposed  83.1  53.7  0.810    71.8  51.8  0.704    86.5  60.4  0.830    67.4  58.5  0.687    77.3  55.2  0.759    177.2

Fig.4

Retrieval results of plaid fabric images of color block (a), window (b), academic (c) and Welsh (d)

[1] ZHANG Ning, XIANG Jun, WANG Lei, et al. Research progress of content-based fabric image retrieval[J]. Textile Research Journal, 2023, 93(5/6): 1401-1418.
[2] ZHANG Lijin, LIU Xueliang, LU Zihong, et al. Lace fabric image retrieval based on multi-scale and rotation invariant LBP[C]// JAIN R,JIANG S, SMITH J, et al. Proceedings of the 7th International Conference on Internet Multimedia Computing and Service. New York: Association for Computing Machinery, 2015: 1-5.
[3] ZHAO Wenhao, XIANG Jun, ZHANG Ning, et al. Research on fabric pattern retrieval based on SURF and VLAD feature coding[J]. Journal of Textile Research, 2023, 44(8): 110-117.
[4] ZHANG Ning, XIANG Jun, WANG Lei, et al. Image retrieval of wool fabric: Part II: based on low-level color features[J]. Textile Research Journal, 2020, 90(7/8): 797-808.
[5] JI Yunrong, ZHOU Weirun. Fabric image retrieval system based on multi-feature fusion[J]. Computer & Digital Engineering, 2021, 49(7): 1460-1464.
[6] JING J, LI Q, LI P, et al. A new method of printed fabric image retrieval based on color moments and gist feature description[J]. Textile Research Journal, 2015, 86(11): 1137-1150.
[7] ZHANG Ning, XIANG Jun, WANG Lei, et al. Image retrieval of wool fabric: part III: based on aggregated convolutional descriptors and approximate nearest neighbors search[J]. Textile Research Journal, 2021, 92(3/4): 434-445.
[8] DUBEY S R, SINGH S K, CHU W T. Vision transformer hashing for image retrieval[C]// WONG L C, LIN C Y. 2022 IEEE International Conference on Multimedia and Expo (ICME). New York: Institute of Electrical and Electronics Engineers, 2022: 1-6.
[9] YANG M, HE D, FAN M, et al. DOLG: single-stage image retrieval with deep orthogonal fusion of local and global features[C]// DICKINSON S. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). New York: Institute of Electrical and Electronics Engineers, 2021: 11752-11761.
[10] DENG J, GUO J, YANG J, et al. ArcFace: additive angular margin loss for deep face recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44: 5962-5979.