Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (08): 89-95. doi: 10.13475/j.fzxb.20240902301

• Textile Engineering •

Plaid fabric image retrieval method based on deep feature fusion

ZHANG Xiaoting1,2, ZHAO Pengyu1, PAN Ruru2, GAO Weidong2

  1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China
  2. College of Textile Science and Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
  • Received: 2024-09-14  Revised: 2025-04-25  Published: 2025-08-15  Online: 2025-08-15
  • Corresponding author: GAO Weidong (b. 1959), male, professor, Ph.D. His main research area is digital textile technology. E-mail: gaowd3@163.com
  • About the first author: ZHANG Xiaoting (b. 1982), female, master's degree. Her main research area is plaid fabric image retrieval.
  • Funding: Young Scientists Fund of the National Natural Science Foundation of China (62202203)


Abstract:

To fully represent the local and global information of plaid fabric images and improve retrieval accuracy, a convolutional neural network (CNN) model combining an attention mechanism and hash coding was proposed for representing and retrieving plaid fabric images. With an existing CNN model as the backbone, the attention mechanism was introduced into the CNN branches to focus on key information, global and local deep features were extracted, and an orthogonal fusion module was adopted to fuse them. A hash encoding layer was used to compress the fused features, and the Hamming distance was used to measure feature similarity, realizing plaid fabric image retrieval while balancing retrieval accuracy and efficiency. To validate the proposed method, more than 44 000 fabric samples were collected from factories to build a plaid fabric image retrieval dataset for the experiments. The results show that the precision and recall of the top five retrieved images reach 77.3% and 55.2%, respectively, and the mean average precision is 0.759, demonstrating the feasibility and effectiveness of the proposed method. Comparative experiments verify its superiority, and the method can provide a reference for fabric design and production.
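
The attention module used in the CNN branches is only described at a high level on this page. As a minimal sketch, assuming a squeeze-and-excitation style channel attention (the paper does not state which attention form is used), the following PyTorch snippet shows how such a block can re-weight backbone feature channels to focus on key information; the class name ChannelAttention and the reduction parameter are illustrative, not taken from the paper.

```python
# Hypothetical SE-style channel attention block; the paper's actual attention
# module may differ. Shown only to illustrate how attention can be inserted
# after a CNN backbone stage to re-weight feature channels.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # emphasize informative channels
```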

Extended abstract:

Objective Fabric image retrieval is used to search for similar fabric images that already exist in a factory so that the corresponding process parameters can be reused to guide production, saving the manpower and material resources consumed by repeated trial weaving. Existing fabric image retrieval methods do not take the characteristics of plaid fabric images into account, and their retrieval performance needs further improvement.

Method To improve the accuracy of plaid fabric image retrieval, a convolutional neural network (CNN) model combining an attention mechanism and hash coding was proposed based on the characteristics of plaid fabrics. With an existing CNN model as the backbone, the attention mechanism was introduced into the CNN branches to focus on the key information of fabric images, and the extracted global and local deep features were fused by an orthogonal fusion module. A hash encoding layer was built to compress the fused features and balance retrieval precision against retrieval efficiency.
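
The orthogonal fusion module and hash encoding layer are described above only functionally. The sketch below assumes a DOLG-style orthogonal decomposition in the spirit of reference [9] (local features are split into components parallel and orthogonal to the global descriptor, and only the orthogonal part is aggregated) together with a tanh-based hash layer; the names fuse_orthogonal and HashLayer and the 64-bit code length are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_orthogonal(local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
    """Sketch of DOLG-style orthogonal fusion.
    local_feat: (B, C, H, W) local feature map; global_feat: (B, C) global descriptor."""
    g = F.normalize(global_feat, dim=1)                    # unit-norm global vector
    proj = torch.einsum("bchw,bc->bhw", local_feat, g)     # scalar projection per location
    parallel = proj.unsqueeze(1) * g[:, :, None, None]     # component parallel to the global feature
    orthogonal = local_feat - parallel                     # keep only the orthogonal component
    pooled = orthogonal.mean(dim=(2, 3))                   # aggregate local information
    return torch.cat([global_feat, pooled], dim=1)         # (B, 2C) fused descriptor

class HashLayer(nn.Module):
    """Illustrative hash encoding layer: compress to `bits` dimensions, squash to (-1, 1)."""
    def __init__(self, in_dim: int, bits: int = 64):
        super().__init__()
        self.fc = nn.Linear(in_dim, bits)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(fused))                  # sign() gives binary codes at retrieval time
```

Taking the sign of the tanh output at retrieval time yields compact binary codes that can be compared with the Hamming distance, which is how the compression step can trade a little precision for much faster search.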

Results The proposed method was tested on a new plaid fabric image retrieval dataset built from more than 44 000 fabric samples collected from factories. The results showed that the average precision (P5) and recall (R5) over the four categories reached 77.3% and 55.2%, respectively, indicating that the system retrieves relevant images effectively. The mean average precision (mAP) of 0.759 demonstrated the robustness of the system across all queries. In addition, P5, R5, and mAP were positively correlated. The retrieval precisions for color block and academic plaids reached 83.1% and 86.5%, respectively, while those for window pane and Welsh plaids were relatively low at 71.8% and 67.4%. The reason is that some window pane and Welsh plaids have a repeat period exceeding the image acquisition size used in this study, which causes differences between images of the same fabric sample collected in different areas. An average P5 of 77.3% means that, on average, 3.865 of the top-5 ranked images are correct. Since what enterprises require from fabric image retrieval is to find similar images quickly, the proposed method meets the needs of fabric manufacturing enterprises. Different low-level feature extraction methods and deep feature extraction methods were compared with the proposed method. The various types of plaid fabrics show significant intra-class and inter-class differences; the low-level features rely heavily on manual design, and their parameters are not universally applicable to different types of plaid fabrics. Although deep features achieved good performance, their retrieval performance was still lower than that of the proposed method because the characteristics of plaid fabric images were not incorporated. The comparative experiments proved the adaptability and superiority of the proposed method for plaid fabric image retrieval.
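
For reference, the sketch below shows one common way to compute the quoted metrics P5, R5, and mAP for a single query; the helper names precision_recall_at_k and average_precision are illustrative, and the paper's exact evaluation protocol may differ.

```python
import numpy as np

def precision_recall_at_k(retrieved_labels, query_label, total_relevant, k=5):
    """Top-k precision and recall for one query; retrieved_labels is ranked best-first."""
    topk = np.asarray(retrieved_labels[:k])
    hits = int((topk == query_label).sum())
    return hits / k, hits / total_relevant

def average_precision(retrieved_labels, query_label):
    """AP for one query; mAP is the mean of AP over all queries."""
    rel = (np.asarray(retrieved_labels) == query_label).astype(float)
    if rel.sum() == 0:
        return 0.0
    cum_prec = np.cumsum(rel) / (np.arange(len(rel)) + 1)   # precision at each rank
    return float((cum_prec * rel).sum() / rel.sum())
```

Under these definitions, an average P5 of 77.3% corresponds to 3.865 relevant images among the top five, matching the figure quoted above.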

Conclusion A plaid fabric image retrieval method based on deep feature fusion was proposed. Attention mechanisms and a hash encoding layer were incorporated into the CNN model to realize feature extraction, feature fusion, and feature compression, and plaid fabric image retrieval was achieved by similarity measurement using the Hamming distance. The results showed that the average P5, R5, and mAP reached 77.3%, 55.2%, and 0.759, respectively, demonstrating the feasibility and effectiveness of the method. Comparison with existing fabric image retrieval methods verified the adaptability and superiority of the proposed method for plaid fabric image retrieval. The proposed method can help enterprises quickly search for existing product process parameters to guide production and improve design, production, and operational efficiency.
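
Similarity measurement with the Hamming distance over binary hash codes can be sketched as follows; this is a minimal illustration assuming sign-thresholded codes, with hypothetical names to_binary and hamming_rank and a toy 64-bit database.

```python
import numpy as np

def to_binary(codes: np.ndarray) -> np.ndarray:
    """Binarize real-valued hash outputs (e.g. tanh activations) to {0, 1}."""
    return (codes > 0).astype(np.uint8)

def hamming_rank(query_code: np.ndarray, database_codes: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k database entries closest to the query in Hamming distance."""
    distances = (database_codes != query_code).sum(axis=1)   # count of disagreeing bits
    return np.argsort(distances)[:top_k]

# Toy example with hypothetical 64-bit codes for a database of 10 fabrics
rng = np.random.default_rng(0)
db = to_binary(rng.standard_normal((10, 64)))
q = to_binary(rng.standard_normal(64))
print(hamming_rank(q, db))
```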

Key words: deep feature, convolutional neural network, attention mechanism, hash encoding, fabric image retrieval, plaid fabric image, production efficiency

CLC number: TS101.8

Fig. 1  Image samples of different types of plaid fabrics

Fig. 2  Feature extraction model for plaid fabric images

Table 1  Comparison of average retrieval performance of different network modules

Module   P5/%   R5/%   mAP
Res12    57.6   37.7   0.568
Res13    61.2   42.6   0.622
Res14    69.1   48.5   0.692
Res23    65.2   44.8   0.645
Res24    74.9   54.2   0.742
Res34    77.3   55.2   0.759

Fig. 3  Orthogonal fusion module

Table 2  Performance comparison of different retrieval methods

Method     Color block plaid        Window pane plaid        Academic plaid           Welsh plaid              Average                  Time/ms
           P5/%   R5/%   mAP        P5/%   R5/%   mAP        P5/%   R5/%   mAP        P5/%   R5/%   mAP        P5/%   R5/%   mAP
MRLBP      55.5   36.4   0.462      59.7   42.5   0.523      66.5   44.8   0.605      59.3   50.3   0.563      59.6   43.2   0.531      240.9
GCM        68.3   43.5   0.641      53.3   37.0   0.455      65.4   43.6   0.622      46.3   37.8   0.415      59.3   40.6   0.541      269.5
VTH        75.2   50.4   0.731      69.7   50.0   0.669      84.3   58.6   0.829      62.2   52.5   0.624      71.8   52.2   0.699      212.7
DOLG       82.1   55.5   0.785      70.6   50.5   0.669      83.8   57.7   0.802      67.8   57.8   0.675      75.3   54.9   0.725      210.3
ACD        76.6   51.4   0.709      70.9   51.1   0.692      76.8   53.8   0.758      66.7   56.3   0.685      72.4   53.0   0.706       30.6
Proposed   83.1   53.7   0.810      71.8   51.8   0.704      86.5   60.4   0.830      67.4   58.5   0.687      77.3   55.2   0.759      177.2

Fig. 4  Retrieval results for different categories of plaid fabric images. Note: in each sub-figure, the six results output by the retrieval system are arranged from left to right in ascending order of similarity.

[1] ZHANG Ning, XIANG Jun, WANG Lei, et al. Research progress of content-based fabric image retrieval[J]. Textile Research Journal, 2023, 93(5/6): 1401-1418.
[2] ZHANG Lijin, LIU Xueliang, LU Zihong, et al. Lace fabric image retrieval based on multi-scale and rotation invariant LBP[C]// JAIN R, JIANG S, SMITH J, et al. Proceedings of the 7th International Conference on Internet Multimedia Computing and Service. New York: Association for Computing Machinery, 2015: 1-5.
[3] ZHAO Wenhao, XIANG Jun, ZHANG Ning, et al. Research on fabric pattern retrieval based on SURF and VLAD feature coding[J]. Journal of Textile Research, 2023, 44(8): 110-117.
[4] ZHANG Ning, XIANG Jun, WANG Lei, et al. Image retrieval of wool fabric: Part II: based on low-level color features[J]. Textile Research Journal, 2020, 90(7/8): 797-808.
[5] JI Yunrong, ZHOU Weirun. Fabric image retrieval system based on multi-feature fusion[J]. Computer & Digital Engineering, 2021, 49(7): 1460-1464.
[6] JING J, LI Q, LI P, et al. A new method of printed fabric image retrieval based on color moments and gist feature description[J]. Textile Research Journal, 2015, 86(11): 1137-1150.
[7] ZHANG Ning, XIANG Jun, WANG Lei, et al. Image retrieval of wool fabric: Part III: based on aggregated convolutional descriptors and approximate nearest neighbors search[J]. Textile Research Journal, 2021, 92(3/4): 434-445.
[8] DUBEY S R, SINGH S K, CHU W T. Vision transformer hashing for image retrieval[C]// WONG L C, LIN C Y. 2022 IEEE International Conference on Multimedia and Expo (ICME). New York: Institute of Electrical and Electronics Engineers, 2022: 1-6.
[9] YANG M, HE D, FAN M, et al. DOLG: single-stage image retrieval with deep orthogonal fusion of local and global features[C]// DICKINSON S. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). New York: Institute of Electrical and Electronics Engineers, 2021: 11752-11761.
[10] DENG J, GUO J, YANG J, et al. ArcFace: additive angular margin loss for deep face recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44: 5962-5979.