Journal of Textile Research ›› 2026, Vol. 47 ›› Issue (1): 72-79. DOI: 10.13475/j.fzxb.20250500101

• Fiber Materials •

Polyester fiber ultrastructure segmentation algorithm based on improved U-Mamba network

ZHOU Yu1, WEI Bing1(), HAO Kuangrong1, GAO Lei2, WANG Huaping3,4   

1. School of Information and Intelligent Science, Donghua University, Shanghai 201620, China
    2. Commonwealth Scientific and Industrial Research Organisation, Glen Osmond 5064, Australia
    3. College of Materials Science and Engineering, Donghua University, Shanghai 201620, China
    4. State Key Laboratory of Advanced Fiber Materials, Donghua University, Shanghai 201620, China
  • Received:2025-05-06 Revised:2025-11-04 Online:2026-01-15 Published:2026-01-15
  • Contact: WEI Bing E-mail:bingwei@dhu.edu.cn

Abstract:

Objective Agglomeration in the ultrastructure of polyester fibers degrades product performance. To address this, an improved U-Mamba segmentation algorithm integrating a high-order visual state space module and a multi-scale fusion module is proposed. The algorithm achieves accurate identification and segmentation of agglomerates, providing technical support for machine vision-based ultrastructure analysis of high-performance fibers in industrial production.

Method Agglomeration in the ultrastructure of polyester fibers during industrial production negatively impacts product properties such as color uniformity, mechanical consistency, and gloss. To address this issue, a polyester fiber ultrastructure segmentation algorithm based on an improved U-Mamba network was proposed. First, high-resolution images of agglomerated particle distributions in the fiber ultrastructure were acquired with a GeminiSEM 560 scanning electron microscope, and a corresponding dataset was constructed to evaluate model performance. A pretrained neural network integrated with edge detection algorithms was employed to denoise, filter, and automatically colorize the fiber images. An improved U-Mamba-based deep network was then adopted to accurately identify and segment agglomerates in the ultrastructure.
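The paper's preprocessing network itself is not reproduced here, but its denoise-and-edge-filter stage can be sketched with a plain smoothing pass followed by a Sobel detector. This is a simplified, numpy-only stand-in under stated assumptions: the actual method uses a pretrained neural network with edge detection and automatic colorization, and the function name and threshold below are illustrative.

```python
import numpy as np

def sobel_edges(gray: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Toy denoise-then-edge stage: 3x3 mean smoothing + Sobel magnitude.

    gray: 2-D float image scaled to [0, 1]. Returns a binary edge map.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x-kernel
    ky = kx.T                                                          # Sobel y-kernel
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(h):                       # naive convolution, fine for a sketch
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-8                  # normalize gradient magnitude to [0, 1]
    return (mag > threshold).astype(np.uint8)

# A vertical step edge is detected at the step boundary:
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

In practice the edge map would then guide the automatic colorization step described in the paper; only the filtering portion is sketched here.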

Results A polyester fiber ultrastructure dataset was established, and the proposed model was compared with five mainstream segmentation models: DeepLabV3, UNet, AttUNet, TransUNet, and SwinUNet. The proposed model demonstrated superior segmentation performance across five evaluation metrics: intersection over union (IoU), dice similarity coefficient (DSC), accuracy (Acc), specificity (Spe), and sensitivity (Sen). Specifically, the model achieved an IoU of 78.9%, DSC of 88.2%, Acc of 96.1%, Spe of 97.4%, and Sen of 89.1%, indicating excellent capability in segmenting aggregates within the ultrastructure. Furthermore, ablation studies were conducted to assess the contributions of the high-order visual state space module and the multi-scale information fusion module to the overall segmentation performance. Removing the high-order visual state space module resulted in a 3.4% decrease in IoU, while omitting the multi-scale information fusion module caused a 2.2% reduction; when both modules were removed, the IoU decreased by 4.5%, highlighting the crucial role of these modules in enhancing segmentation performance. Finally, visualization experiments were performed to compare the segmentation results of different algorithms on the proposed dataset. The findings indicated that, relative to other models, the proposed method more accurately identifies and segments abnormal aggregates, contributing a novel approach to the application of neural networks in the segmentation of polyester fiber ultrastructures.
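The five evaluation metrics above are standard confusion-matrix quantities for binary segmentation. As a reference, they can be computed from a predicted mask and a ground-truth mask as follows (a minimal sketch; the function name is illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """IoU, DSC, Acc, Spe, Sen for binary masks (1 = aggregate, 0 = background)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)        # aggregate pixels correctly detected
    tn = np.sum(~pred & ~gt)      # background pixels correctly rejected
    fp = np.sum(pred & ~gt)       # false alarms
    fn = np.sum(~pred & gt)       # missed aggregate pixels
    return {
        "IoU": tp / (tp + fp + fn),
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "Spe": tn / (tn + fp),
        "Sen": tp / (tp + fn),
    }
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU/(1+IoU)), which is why the two metrics rank the compared models in the same order in Tab. 1.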

Conclusion This paper proposed an improved U-Mamba-based segmentation algorithm for polyester fiber ultrastructure. Tailored to the requirements of ultrastructural analysis, a dedicated dataset of polyester fiber ultrastructure was constructed. During image preprocessing, a pretrained neural network integrated with edge detection algorithms was employed to perform denoising, filtering, and automatic colorization on fiber ultrastructure images to facilitate subsequent segmentation. The key innovation lies in the design of a high-order visual state space module, which introduces higher-order operations into semantic segmentation. This module maintains the global receptive field advantages of SS2D while minimizing redundant information. Furthermore, convolutional blocks are embedded within the visual state space module, effectively combining the feature extraction capabilities of both convolutional operations and SS2D to enrich multi-level feature representations. Additionally, a multi-level multi-scale feature fusion module incorporating channel attention and spatial attention mechanisms was designed to enhance feature diversity during decoder fusion. Experimental results demonstrate that the proposed model achieves superior segmentation performance on the polyester fiber ultrastructure dataset compared to existing methods, while maintaining high segmentation accuracy. The integration of computer vision techniques for polyester fiber ultrastructure analysis represents a future trend in intelligent industrial production. This approach not only improves working conditions by replacing manual inspection of microscopic fiber defects but also enhances detection efficiency in practical manufacturing. The algorithm successfully identifies and segments agglomerates within the ultrastructure, showing potential for applications in fiber material defect detection. The proposed method also provides insights for embedded device deployment, which will be the focus of future research.
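The channel- and spatial-attention mechanisms in the multi-scale fusion module follow the general CBAM pattern [7]. A minimal numpy sketch is given below under stated assumptions: the shared-MLP weights `w1` and `w2` are illustrative placeholders (in CBAM they are learned), and the 7×7 convolution of the spatial branch is replaced by a simple sum of the pooled maps for brevity.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """CBAM-style channel attention. feat: (C, H, W); w1: (C/r, C); w2: (C, C/r)."""
    avg = feat.mean(axis=(1, 2))   # (C,) global average pooling over space
    mx = feat.max(axis=(1, 2))     # (C,) global max pooling over space
    # Shared two-layer MLP applied to both pooled descriptors, summed, then gated.
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return feat * gate[:, None, None]

def spatial_attention(feat: np.ndarray) -> np.ndarray:
    """CBAM-style spatial attention via per-pixel channel statistics."""
    avg = feat.mean(axis=0)        # (H, W) average over channels
    mx = feat.max(axis=0)          # (H, W) max over channels
    gate = sigmoid(avg + mx)       # stand-in for CBAM's 7x7 conv on [avg; max]
    return feat * gate[None, :, :]

# Applying both in sequence, as CBAM does (channel first, then spatial):
rng = np.random.default_rng(0)
feat = np.abs(rng.normal(size=(4, 5, 5)))          # a toy (C, H, W) feature map
w1 = rng.normal(size=(2, 4))                       # reduction ratio r = 2
w2 = rng.normal(size=(4, 2))
refined = spatial_attention(channel_attention(feat, w1, w2))
```

Because both gates are sigmoid outputs in (0, 1), the refinement reweights features rather than adding new ones, which is what makes it suitable for the decoder-side fusion described above.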

Key words: polyester fiber, ultrastructure distribution, machine vision, U-Mamba algorithm, semantic segmentation

CLC Number: TP391.4

Fig.1

SEM images of ultrastructure of polyester fiber. (a) Distribution of inhomogeneous and relatively homogeneous regions at 4 μm scale; (b) Distribution of inhomogeneous regions and abnormally agglomerated particles at 1 μm scale

Fig.2

Preprocessing neural network model structure

Fig.3

Example of dataset. (a) Original image; (b) Mask image

Fig.4

Segmentation process flow

Fig.5

Overall network structure

Fig.6

Multi-scale information fusion module structure

Fig.7

High-order visual state space module structure

Tab.1

Comparative experimental results /%

Method           IoU   DSC   Acc   Spe   Sen
DeepLabV3[11]    67.4  81.2  92.4  93.4  82.9
UNet[6]          70.1  82.4  93.3  95.9  82.5
AttUNet[12]      72.4  84.0  94.1  96.9  81.3
TransUNet[13]    72.9  84.4  93.9  95.8  86.2
SwinUNet[14]     73.5  84.7  94.2  96.4  84.9
Proposed method  78.9  88.2  96.1  97.4  89.1

Tab.2

Ablation study results /%

High-order visual      Multi-scale information   IoU   DSC   Acc   Spe   Sen
state space module     fusion module
√                      √                         78.9  88.2  96.1  97.4  89.1
×                      √                         75.5  86.0  94.5  95.6  87.2
√                      ×                         76.7  87.7  95.3  97.1  87.9
×                      ×                         74.4  85.3  94.2  95.5  86.5

Fig.8

Segmentation results of different models on polyester fiber ultrastructure dataset

[1] SONG Weiguang, WANG Dong, DU Changsen, et al. Preparation and properties of self-dispersed nanoscale carbon black for in situ polymerization of spun-dyed polyester fiber[J]. Journal of Textile Research, 2023, 44(4): 115-123.
[2] BAI Enlong, ZHANG Zhouqiang, GUO Zhongchao, et al. Cotton color detection method based on machine vision[J]. Journal of Textile Research, 2024, 45(3): 36-43.
[3] YANG Jinpeng, JING Junfeng, LI Jiguo, et al. Design of defect detection system for glass fiber plied yarn based on machine vision[J]. Journal of Textile Research, 2024, 45(5): 193-201.
[4] MA J, LI F F, WANG B. U-mamba: enhancing long-range dependency for biomedical image segmentation[EB/OL]. (2024-01-09)[2025-05-04]. https://arxiv.org/abs/2401.04722.
[5] ZHANG R, ISOLA P, EFROS A A. Colorful image colorization[M]//Computer vision-ECCV 2016. Cham: Springer International Publishing, 2016: 649-666.
[6] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. Cham: Springer, 2015: 234-241.
[7] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]// Computer Vision - ECCV 2018. Cham: Springer, 2018: 3-19.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J/OL] // Advances in Neural Information Processing Systems, 2017, 30: 5998-6008.
[9] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324. doi: 10.1109/5.726791.
[10] WEN Jiaqi, LI Xinrong, FENG Wenqian, et al. Rapid extraction of edge contours of printed fabrics[J]. Journal of Textile Research, 2024, 45(5): 165-173.
[11] CHEN L C, PAPANDREOU G, SCHROFF F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. (2017-06-17)[2025-05-04]. https://arxiv.org/abs/1706.05587.
[12] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: learning where to look for the pancreas[EB/OL]. (2018-04-11)[2025-05-04]. https://arxiv.org/abs/1804.03999.
[13] CHEN J N, LU Y Y, YU Q H, et al. TransUNet: transformers make strong encoders for medical image segmentation[EB/OL]. (2021-02-18)[2025-05-04]. https://arxiv.org/abs/2102.04306.
[14] CAO H, WANG Y Y, CHEN J, et al. Swin-unet: unet-like pure transformer for medical image segmentation[M]//Computer vision-ECCV 2022 workshops. Cham: Springer Nature Switzerland, 2023: 205-218.