Journal of Textile Research ›› 2026, Vol. 47 ›› Issue (1): 72-79. doi: 10.13475/j.fzxb.20250500101

• Fiber Materials •

  • Corresponding author: WEI Bing (1990—), male, assistant professor, Ph.D. His research interests include bio-inspired computing, vision and brain-inspired intelligence, digital textiles, and fiber ultrastructure inspection. E-mail: bingwei@dhu.edu.cn
  • About the first author: ZHOU Yu (2002—), male, master's student. His research interests include machine vision and fiber ultrastructure inspection.
  • Funding: Fundamental Research Funds for the Central Universities (2232025G-02; 2232025G-09); Discipline Innovation and Cultivation Program (XKCX202313)

Polyester fiber ultrastructure segmentation algorithm based on improved U-Mamba network

ZHOU Yu1, WEI Bing1, HAO Kuangrong1, GAO Lei2, WANG Huaping3,4

  1. School of Information and Intelligent Science, Donghua University, Shanghai 201620, China
    2. Commonwealth Scientific and Industrial Research Organisation, Glen Osmond 5064, Australia
    3. College of Materials Science and Engineering, Donghua University, Shanghai 201620, China
    4. State Key Laboratory of Advanced Fiber Materials, Donghua University, Shanghai 201620, China
  • Received: 2025-05-06  Revised: 2025-11-04  Published: 2026-01-15  Online: 2026-01-15

Abstract:

To address the negative impact of agglomeration in the ultrastructure of polyester fibers on product properties such as color uniformity, mechanical consistency, and gloss in industrial production, a polyester fiber ultrastructure segmentation algorithm based on an improved U-Mamba network is proposed. High-resolution images of agglomerated particle distributions in the polyester fiber ultrastructure were first acquired with a scanning electron microscope, and a corresponding ultrastructure dataset was built to evaluate model performance. A pre-trained neural network combined with an edge detection algorithm was used to denoise, filter, and automatically colorize the fiber images. With a newly designed high-order visual state space module and a multi-scale information fusion module, the improved U-Mamba deep network accurately identifies and segments agglomerates in the ultrastructure. Experimental results show that, on the ultrastructure dataset, the algorithm achieves higher segmentation accuracy than other classical algorithms, with an intersection over union of 78.9% and an average accuracy of 96.1%, meeting the application requirements of machine vision technology for high-performance fiber ultrastructure analysis in industrial production.

Abstract:

Objective To address product performance degradation caused by agglomeration in the ultrastructure of polyester fibers, this research proposes an improved U-Mamba segmentation algorithm integrating a high-order visual state space module and a multi-scale information fusion module. The algorithm accurately identifies and segments agglomerates, providing technical support for machine vision-based ultrastructure analysis of high-performance fibers in industrial production.
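The visual state space (SS2D) layers that U-Mamba builds on are rooted in a discrete linear state space recurrence. As a rough illustration only (a 1-D, scalar-state simplification with hand-picked parameters, not the paper's 2-D selective scan with learned, input-dependent parameters), the core scan can be sketched as:

```python
def ssm_scan(u, a, b, c):
    """Minimal 1-D, scalar-state discrete state space scan:
        x_k = a * x_{k-1} + b * u_k,   y_k = c * x_k.
    This is the sequential recurrence that Mamba-style selective scans
    compute efficiently; parameters here are fixed, whereas a selective
    scan makes them input-dependent."""
    x = 0.0
    ys = []
    for u_k in u:
        x = a * x + b * u_k      # leaky state update
        ys.append(c * x)         # readout
    return ys

# With a = 0.9 the state is a leaky accumulator: every past input still
# contributes to the current output, i.e. a global receptive field.
y = ssm_scan([1.0, 1.0, 1.0], a=0.9, b=1.0, c=1.0)
```

A Mamba-style layer runs such scans over flattened image patches in several directions; the sketch only shows why the recurrence gives each output a receptive field over all earlier inputs at linear cost.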

Method To address agglomeration effects in the ultrastructure of polyester fibers during industrial production, which degrade product properties such as color uniformity, mechanical consistency, and gloss, a polyester fiber ultrastructure segmentation algorithm based on an improved U-Mamba network was proposed. First, high-resolution images of agglomerated particle distributions in the polyester fiber ultrastructure were acquired using a GeminiSEM 560 scanning electron microscope, and a corresponding dataset was constructed to evaluate the model's performance. A pre-trained neural network integrated with edge detection algorithms was employed to denoise, filter, and automatically colorize the fiber images. Then, an improved deep network model based on U-Mamba was adopted to accurately identify and segment agglomerates in the ultrastructure.
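The preprocessing stage pairs a pretrained colorization network with classical edge detection. The colorization network (ref. [5]) is not reproduced here; as a minimal, hedged sketch of the edge-detection side only, a Sobel gradient magnitude over a grayscale image stored as plain nested lists might look like:

```python
def sobel_edges(img):
    """Sobel gradient magnitude for a grayscale image given as a list of
    rows (plain nested lists). Border pixels are left at 0.0. In the
    paper's pipeline an edge map like this accompanies a pretrained
    colorization network; that network is not reproduced in this sketch."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge between columns 1 and 2 yields a strong response.
step = [[0, 0, 1, 1] for _ in range(4)]
edges = sobel_edges(step)
```

In practice a library implementation (e.g. an OpenCV Sobel filter) would replace these hand-rolled loops; the sketch only makes the kernel arithmetic explicit.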

Results A polyester fiber ultrastructure dataset was established, and the proposed model was compared with five mainstream segmentation models: DeepLabV3, UNet, AttUNet, TransUNet, and SwinUNet. The proposed model demonstrated superior segmentation performance across five evaluation metrics, i.e. intersection over union (IoU), dice similarity coefficient (DSC), accuracy (Acc), specificity (Spe), and sensitivity (Sen). Specifically, the model achieved an IoU of 78.9%, DSC of 88.2%, Acc of 96.1%, Spe of 97.4%, and Sen of 89.1%, indicating excellent capability in segmenting aggregates within the ultrastructure. Furthermore, ablation studies were conducted to assess the contributions of the high-order visual state space module and the multi-scale information fusion module to overall segmentation performance. Removing the high-order visual state space module resulted in a 3.4% decrease in IoU, omitting the multi-scale information fusion module caused a 2.2% reduction, and removing both modules decreased the IoU by 4.5%, highlighting the crucial role of both modules in enhancing segmentation performance. Finally, visualization experiments were performed to compare the segmentation results of different algorithms on the proposed dataset. The findings indicated that, relative to other models, the proposed method more accurately identifies and segments abnormal aggregates, contributing a novel approach to the application of neural networks in the segmentation of polyester fiber ultrastructures.
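All five reported metrics derive from the binary confusion matrix (TP, FP, FN, TN) between the predicted and ground-truth agglomerate masks. A minimal sketch, assuming flat 0/1 mask lists and non-degenerate masks (no zero denominators):

```python
def segmentation_metrics(pred, target):
    """The five metrics reported in Table 1, computed from the binary
    confusion matrix of flat 0/1 mask lists. Assumes non-degenerate
    masks (every denominator is nonzero)."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    return {
        "IoU": tp / (tp + fp + fn),              # intersection over union
        "DSC": 2 * tp / (2 * tp + fp + fn),      # dice similarity coefficient
        "Acc": (tp + tn) / len(pred),            # accuracy
        "Spe": tn / (tn + fp),                   # specificity
        "Sen": tp / (tp + fn),                   # sensitivity
    }

# One pixel in each confusion-matrix cell: all five metrics are defined.
m = segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```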

Conclusion This paper proposed an improved U-Mamba-based segmentation algorithm for polyester fiber ultrastructure. Tailored to the requirements of ultrastructural analysis, a dedicated dataset of polyester fiber ultrastructure was constructed. During image preprocessing, a pretrained neural network integrated with edge detection algorithms was employed to perform denoising, filtering, and automatic colorization of fiber ultrastructure images to facilitate subsequent segmentation. The key innovation lies in the design of a high-order visual state space module, which introduces higher-order operations into semantic segmentation. This module retains the global receptive field advantage of SS2D while minimizing redundant information. Furthermore, convolutional blocks are embedded within the visual state space module, effectively combining the feature extraction capabilities of convolutional operations and SS2D to enrich multi-level feature representations. Additionally, a multi-level multi-scale feature fusion module incorporating channel and spatial attention mechanisms was designed to enhance feature diversity during decoder fusion. Experimental results demonstrate that the proposed model achieves superior segmentation performance on the polyester fiber ultrastructure dataset compared with existing methods, while maintaining high segmentation accuracy. The integration of computer vision techniques into polyester fiber ultrastructure analysis represents a future trend in intelligent industrial production: it improves working conditions by replacing manual inspection of microscopic fiber defects and enhances detection efficiency in practical manufacturing. The algorithm successfully identifies and segments agglomerates within the ultrastructure, showing potential for applications in fiber material defect detection. The proposed method also provides insights for embedded device deployment, which will be the focus of future research.
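The multi-scale fusion module combines channel and spatial attention in the spirit of CBAM (ref. [7]). As an illustrative sketch of the channel-attention half only, with the learned shared MLP replaced by an identity mapping for brevity (so the gate is just a sigmoid of each channel's global average), one could write:

```python
import math

def channel_attention(feat):
    """Channel attention over a feature map given as a list of C channels,
    each an H x W list of lists. Each channel is re-weighted by a sigmoid
    gate of its global average value. NOTE: the learned shared MLP used
    in CBAM is replaced by the identity here for brevity."""
    gated = []
    for ch in feat:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        w = 1.0 / (1.0 + math.exp(-mean))        # sigmoid gate per channel
        gated.append([[w * v for v in row] for row in ch])
    return gated

# A low-activation channel is gated at 0.5; a high-activation one near 1.
feats = [[[0.0, 0.0], [0.0, 0.0]],   # mean 0 -> gate 0.5
         [[2.0, 2.0], [2.0, 2.0]]]   # mean 2 -> gate ~0.88
out = channel_attention(feats)
```

The spatial-attention half would analogously pool across channels and gate each pixel; in the paper's module both gates act on the decoder's fused multi-scale features.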

Key words: polyester fiber, ultrastructure distribution, machine vision, U-Mamba algorithm, semantic segmentation

CLC number: TP391.4

Fig. 1

SEM images of polyester fiber ultrastructure

Fig. 2

Structure of the preprocessing neural network model

Fig. 3

Examples from the dataset

Fig. 4

Segmentation flowchart

Fig. 5

Overall network structure

Fig. 6

Structure of the multi-scale information fusion module

Fig. 7

Structure of the high-order visual state space module

Table 1

Comparative experiment results (%)

Method           IoU   DSC   Acc   Spe   Sen
DeepLabV3[11]    67.4  81.2  92.4  93.4  82.9
UNet[6]          70.1  82.4  93.3  95.9  82.5
AttUNet[12]      72.4  84.0  94.1  96.9  81.3
TransUNet[13]    72.9  84.4  93.9  95.8  86.2
SwinUNet[14]     73.5  84.7  94.2  96.4  84.9
Proposed method  78.9  88.2  96.1  97.4  89.1

Table 2

Ablation experiment results (%)

High-order visual       Multi-scale information
state space module      fusion module             IoU   DSC   Acc   Spe   Sen
√                       √                         78.9  88.2  96.1  97.4  89.1
×                       √                         75.5  86.0  94.5  95.6  87.2
√                       ×                         76.7  87.7  95.3  97.1  87.9
×                       ×                         74.4  85.3  94.2  95.5  86.5

Fig. 8

Segmentation results of different models on the polyester fiber ultrastructure dataset

[1] SONG Weiguang, WANG Dong, DU Changsen, et al. Preparation and properties of self-dispersed nanoscale carbon black for in situ polymerization of spun-dyed polyester fiber[J]. Journal of Textile Research, 2023, 44(4): 115-123.
[2] BAI Enlong, ZHANG Zhouqiang, GUO Zhongchao, et al. Cotton color detection method based on machine vision[J]. Journal of Textile Research, 2024, 45(3): 36-43.
[3] YANG Jinpeng, JING Junfeng, LI Jiguo, et al. Design of defect detection system for glass fiber plied yarn based on machine vision[J]. Journal of Textile Research, 2024, 45(5): 193-201.
[4] MA J, LI F F, WANG B. U-Mamba: enhancing long-range dependency for biomedical image segmentation[EB/OL]. (2024-01-09)[2025-05-04]. https://arxiv.org/abs/2401.04722.
[5] ZHANG R, ISOLA P, EFROS A A. Colorful image colorization[M]//Computer vision-ECCV 2016. Cham: Springer International Publishing, 2016: 649-666.
[6] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. Cham: Springer, 2015: 234-241.
[7] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]// Computer Vision - ECCV 2018. Cham: Springer, 2018: 3-19.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017, 30: 5998-6008.
[9] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324. doi: 10.1109/5.726791.
[10] WEN Jiaqi, LI Xinrong, FENG Wenqian, et al. Rapid extraction of edge contours of printed fabrics[J]. Journal of Textile Research, 2024, 45(5): 165-173.
[11] CHEN L C, PAPANDREOU G, SCHROFF F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. (2017-06-17)[2025-05-04]. https://arxiv.org/abs/1706.05587.
[12] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: learning where to look for the pancreas[EB/OL]. (2018-04-11)[2025-05-04]. https://arxiv.org/abs/1804.03999.
[13] CHEN J N, LU Y Y, YU Q H, et al. TransUNet: transformers make strong encoders for medical image segmentation[EB/OL]. (2021-02-18)[2025-05-04]. https://arxiv.org/abs/2102.04306.
[14] CAO H, WANG Y Y, CHEN J, et al. Swin-Unet: Unet-like pure transformer for medical image segmentation[M]//Computer vision-ECCV 2022 workshops. Cham: Springer Nature Switzerland, 2023: 205-218.