Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (07): 217-226. DOI: 10.13475/j.fzxb.20240900601

• Machinery & Equipment •

Pose estimation and bobbin grasping based on deep learning methods

WANG Qing, JIANG Yuefu, ZHAO Tiantian, ZHAO Shihang, LIU Jiayi

  1. College of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an, Shaanxi 710600, China
  • Received: 2024-09-02  Revised: 2025-03-27  Online: 2025-07-15  Published: 2025-08-14
  • Contact: JIANG Yuefu  E-mail: 13453142491@163.com

Abstract:

Objective As the textile industry transitions toward intelligent technology, there is an urgent need to automate bobbin-changing operations in the winding process. To address the challenges of yarn bobbin pose estimation and grasping, deep learning algorithms were adopted to predict bobbin pose, providing key technological support for the intelligent development of the textile industry, improving production efficiency, and promoting sustainable development.

Method A system comprising a robotic arm, a camera, and other components was built, real and synthetic datasets were created, and online data augmentation was performed. A Swin Transformer was adopted to process the yarn bobbin color information, while KPConv was employed to extract geometric features. After local pixel-level fusion of the two feature streams, the pose was predicted, and the robotic arm was controlled to grasp the yarn bobbin based on the predicted pose.
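To make the method concrete, the following is a minimal PyTorch sketch of the dense color/geometry fusion described above. The class name and layer sizes are illustrative assumptions: the paper's model uses a Swin Transformer image branch and a KPConv point branch, for which small stand-in encoders are substituted here so that the local pixel fusion and per-point pose head can be shown end to end.

```python
# Minimal sketch of per-pixel color / per-point geometry fusion for 6-DoF
# pose prediction. Stand-in encoders replace the paper's Swin Transformer
# and KPConv branches; all layer sizes are illustrative.
import torch
import torch.nn as nn

class FusionPoseNet(nn.Module):
    def __init__(self, rgb_dim=64, geo_dim=64, fused_dim=128):
        super().__init__()
        # Stand-in for the Swin Transformer image branch (per-pixel features).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, rgb_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(rgb_dim, rgb_dim, 3, padding=1), nn.ReLU(),
        )
        # Stand-in for the KPConv geometry branch (per-point features).
        self.geo_encoder = nn.Sequential(
            nn.Linear(3, geo_dim), nn.ReLU(),
            nn.Linear(geo_dim, geo_dim), nn.ReLU(),
        )
        # Per-point pose head: quaternion rotation plus translation.
        self.pose_head = nn.Sequential(
            nn.Linear(rgb_dim + geo_dim, fused_dim), nn.ReLU(),
            nn.Linear(fused_dim, 7),  # (qw, qx, qy, qz, tx, ty, tz)
        )

    def forward(self, rgb, points, pixel_idx):
        """rgb: (B,3,H,W); points: (B,N,3) camera-frame points;
        pixel_idx: (B,N) flat pixel index of each point's projection."""
        f_rgb = self.rgb_encoder(rgb)                     # (B,C,H,W)
        B, C, H, W = f_rgb.shape
        f_rgb = f_rgb.view(B, C, H * W).transpose(1, 2)   # (B,HW,C)
        # Local pixel fusion: gather the color feature at each point's pixel.
        idx = pixel_idx.unsqueeze(-1).expand(-1, -1, C)
        f_rgb_per_pt = torch.gather(f_rgb, 1, idx)        # (B,N,C)
        f_geo = self.geo_encoder(points)                  # (B,N,C)
        fused = torch.cat([f_rgb_per_pt, f_geo], dim=-1)  # (B,N,2C)
        out = self.pose_head(fused)                       # (B,N,7)
        quat = nn.functional.normalize(out[..., :4], dim=-1)
        return quat, out[..., 4:]

# Example shapes: one 64x64 RGB frame with 500 back-projected depth points.
net = FusionPoseNet()
q, t = net(torch.rand(1, 3, 64, 64), torch.rand(1, 500, 3),
           torch.randint(0, 64 * 64, (1, 500)))
```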

Results Using the trained network model, input bobbin images were processed to produce six-degree-of-freedom pose estimates; the model could predict the poses of five different types of bobbins. Experimental results demonstrated a high level of consistency between the predicted and actual bobbin poses. During testing, 98.7% of predictions achieved an average distance error of less than 10% between the estimated model points and the actual model points, indicating that the network model exhibits high accuracy and stability in the bobbin pose estimation task and effectively handles interference such as lighting variations and color diversity. Grasping experiments were conducted on five different types of bobbins, with each set comprising 100 grasping attempts. The grasping success rate ranged from 96% to 98% across bobbin types, with an average of 96.8%. Overall, the success rates for the various bobbins were consistently high, with minimal differences between them, and under the experimental conditions all types of bobbins could be reliably grasped. The average pose estimation response time per image frame ranged from 0.11 s to 0.14 s, while the average grasping response time ranged from 2.07 s to 2.21 s. The significantly higher grasping response time is primarily attributable to controlling the robotic arm through Python, which has relatively low execution efficiency; the model's efficient pose estimation suggests that deploying the system on a higher-performance hardware platform would leave substantial room for overall optimization.
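The 98.7% figure corresponds to an average-distance criterion over model points. Below is a minimal NumPy sketch of such a check, assuming the common ADD-style rule in which a prediction counts as correct when the mean distance between model points under the predicted and ground-truth poses falls below 10% of the model diameter; the abstract does not spell out the paper's exact thresholding convention.

```python
# Average-distance (ADD-style) accuracy check for a 6-DoF pose prediction.
import numpy as np

def add_error(model_pts, R_pred, t_pred, R_gt, t_gt):
    """Mean distance between model points under predicted and ground-truth
    poses. model_pts: (N,3); R_*: (3,3) rotations; t_*: (3,) translations."""
    pred = model_pts @ R_pred.T + t_pred
    gt = model_pts @ R_gt.T + t_gt
    return np.linalg.norm(pred - gt, axis=1).mean()

def is_correct(model_pts, R_pred, t_pred, R_gt, t_gt, rel_thresh=0.1):
    # Model diameter: largest pairwise distance between model points.
    d = np.linalg.norm(model_pts[:, None] - model_pts[None, :], axis=-1).max()
    return add_error(model_pts, R_pred, t_pred, R_gt, t_gt) < rel_thresh * d
```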

Conclusion The study indicates that the higher grasping response time is primarily due to the robotic arm being controlled through Python, which has relatively low execution efficiency. The model's high efficiency in pose estimation, however, suggests that deployment on a higher-performance hardware platform would leave significant room for overall system optimization. Overall, the method demonstrates high accuracy and stability in bobbin pose estimation and grasping, providing valuable insights and references for intelligent design and practical applications in the textile industry.

Key words: deep learning, pose estimation, bobbin grasping, color feature extraction, geometric feature extraction, intelligent bobbin replacement

CLC Number: 

  • TP391.41

Fig.1

Experimental platform

Fig.2

Robot grasping system framework

Fig.3

Overall network structure

Fig.4

Swin Transformer network architecture

Fig.5

Rigid convolution (KPConv) structure

Fig.6

Experimental platform for creating the real dataset

Fig.7

Synthetic bobbin dataset

Fig.8

Examples of augmentation techniques. (a) Original image; (b) Rotation augmentation; (c) Scaling augmentation; (d) Rotation and scaling augmentation
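A minimal OpenCV sketch of the online rotation and scaling augmentation illustrated in Fig. 8 follows. The parameter ranges are illustrative assumptions, and in a pose estimation dataset the ground-truth pose labels must be transformed consistently with the image; only the image side is shown here.

```python
# Online rotation/scaling augmentation of one training image (image side only;
# pose labels must be updated with the same transform in a real pipeline).
import cv2
import numpy as np

def augment(img, max_angle=30.0, scale_range=(0.8, 1.2)):
    h, w = img.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)   # rotation augmentation
    scale = np.random.uniform(*scale_range)            # scaling augmentation
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    return cv2.warpAffine(img, M, (w, h)), angle, scale
```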

Fig.9

Five different types of bobbins. (a) Steaming bobbin; (b) Cone bobbin; (c) Parallel bobbin; (d) Spinning bobbin; (e) Weft bobbin

Fig.10

Comparison of predicted and actual poses for different types of bobbins under various camera angles. (a) Steaming bobbin; (b) Cone bobbin; (c) Parallel bobbin; (d) Spinning bobbin; (e) Weft bobbin

Fig.11

Bobbin grasping flowchart
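The grasping flow of Fig. 11 can be summarized in a short, hedged sketch. The camera, pose network, and arm interfaces below are hypothetical stand-ins for the paper's hardware stack, and T_base_cam denotes the hand-eye calibration transform from the camera frame to the robot base frame.

```python
# One grasp cycle: capture -> estimate pose -> transform to robot frame ->
# approach -> grasp. All interfaces (camera, pose_net, arm) are hypothetical.
import numpy as np

def grasp_once(camera, pose_net, arm, T_base_cam):
    rgb, depth = camera.capture()                  # acquire one RGB-D frame
    T_cam_obj = pose_net.estimate(rgb, depth)      # 4x4 bobbin pose, camera frame
    T_base_obj = T_base_cam @ T_cam_obj            # express pose in robot frame
    # Pre-grasp point 10 cm along the bobbin's -z axis (illustrative offset).
    pre_grasp = T_base_obj @ np.array([0.0, 0.0, -0.10, 1.0])
    arm.move_to(pre_grasp[:3])                     # move above the bobbin
    arm.move_to(T_base_obj[:3, 3])                 # descend to the grasp point
    arm.close_gripper()
    return arm.grasp_succeeded()
```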

Tab.1

Bobbin grasping experimental data

Target object | Grasping attempts | Successful attempts | Average pose estimation response time/s | Average grasping response time/s
Bobbin 1 | 100 | 96 | 0.12 | 2.13
Bobbin 2 | 100 | 98 | 0.14 | 2.21
Bobbin 3 | 100 | 97 | 0.11 | 2.11
Bobbin 4 | 100 | 96 | 0.13 | 2.08
Bobbin 5 | 100 | 97 | 0.12 | 2.07
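The summary statistics quoted in the Results follow directly from Tab. 1; a quick check:

```python
# Verify the abstract's summary figures from the Tab. 1 data:
# 484 successes over 500 attempts gives the 96.8% average success rate.
attempts = [100] * 5
successes = [96, 98, 97, 96, 97]
pose_t = [0.12, 0.14, 0.11, 0.13, 0.12]
grasp_t = [2.13, 2.21, 2.11, 2.08, 2.07]

print(sum(successes) / sum(attempts))   # 0.968 -> 96.8% average success rate
print(min(pose_t), max(pose_t))         # 0.11 s to 0.14 s pose estimation
print(min(grasp_t), max(grasp_t))       # 2.07 s to 2.21 s grasping
```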

Fig.12

Successful grasping of different types of bobbins. (a) Steaming bobbin; (b) Cone bobbin; (c) Parallel bobbin; (d) Spinning bobbin; (e) Weft bobbin
