Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (07): 217-226. doi: 10.13475/j.fzxb.20240900601

• Machinery and Equipment •

Pose estimation and bobbin grasping based on deep learning methods

WANG Qing, JIANG Yuefu, ZHAO Tiantian, ZHAO Shihang, LIU Jiayi

  1. College of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an, Shaanxi 710600, China
  • Received: 2024-09-02  Revised: 2025-03-27  Published: 2025-07-15  Online: 2025-08-14
  • Corresponding author: JIANG Yuefu (b. 1998), male, master's student. Research interest: pose estimation algorithms. E-mail: 13453142491@163.com
  • First author: WANG Qing (b. 1985), female, lecturer, Ph.D. Research interests: machine-vision-based object recognition and pose estimation algorithms.
  • Funding: Young Scientists Fund of the National Natural Science Foundation of China (52105584)

Abstract:

To enable intelligent bobbin-changing operations in the winding and knitting processes of the textile industry, together with efficient fusion of multimodal features, a heterogeneous network was designed: a Swin Transformer network extracts the color features of the bobbin, while a KPConv network extracts the geometric features of the bobbin point cloud. To reduce interference caused by outlier features, a local pixel-wise feature alignment and fusion strategy was designed. Considering that network training requires large amounts of data, and that complex application environments pose challenges such as illumination changes and diverse bobbin colors, two kinds of datasets, real and synthetic, were generated at the dataset construction stage, and online data augmentation was introduced during training to increase dataset complexity and diversity. Finally, the network was trained and tested on the constructed datasets, and bobbin grasping experiments were carried out on a grasping system, verifying the effectiveness of the improved algorithm. The results show that the proposed network model achieves a bobbin recognition accuracy of 98.7%, with an average pose estimation response time of 0.11 to 0.14 s per image frame. A grasping system built on this model achieves a bobbin grasping success rate of 96.8%, with average grasping response times concentrated between 2.07 and 2.21 s. The method thus offers high accuracy and stability in bobbin pose estimation and grasping tasks, and can serve as a reference for intelligent design and applications in the textile industry.
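As a concrete illustration of the online augmentation mentioned above, the sketch below shows what such a pipeline might look like in PyTorch/torchvision. The paper does not enumerate its exact transforms; ColorJitter and RandomAffine are assumptions chosen to mimic the illumination changes and bobbin color diversity it describes.

```python
# A minimal sketch of online data augmentation for bobbin images.
# The specific transforms and parameters are assumptions; the paper
# does not list its exact augmentation pipeline.
from torchvision import transforms

augment = transforms.Compose([
    # Simulate illumination changes and bobbin color diversity.
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    # Small geometric perturbations add viewpoint diversity.
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

# Applied on the fly inside the Dataset, so every epoch sees new variants:
# img_tensor = augment(pil_image)
```

Because the transforms are sampled anew at every access, the effective dataset grows with training time instead of being fixed at dataset-construction time.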



Objective As the textile industry transitions toward intelligent manufacturing, there is an urgent need to automate bobbin-changing operations in the winding process. To address the challenges of yarn bobbin pose estimation and grasping, a deep learning algorithm is adopted to predict the bobbin pose, providing key technological support for the intelligent development of the textile industry, improving production efficiency, and promoting sustainable development.

Method A system comprising a robotic arm, a camera, and other components was built; real and synthetic datasets were created, and online data augmentation was applied during training. A Swin Transformer was adopted to process the bobbin color information, while KPConv was employed to extract geometric features from the point cloud. After local pixel-wise feature fusion, the pose was predicted, and the robotic arm was controlled to grasp the yarn bobbin based on the predicted pose.
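The local pixel-wise alignment and fusion step can be pictured with the sketch below. This is a minimal illustration under assumed shapes, not the authors' implementation: it supposes the Swin Transformer branch yields a dense color feature map, the KPConv branch yields per-point geometric features, and each 3-D point carries the index of the image pixel it projects to, so the two modalities can be paired point by point.

```python
# Minimal sketch of local pixel-wise feature alignment and fusion.
# Shapes and the pairing scheme are assumptions for illustration only.
import torch

def fuse_features(color_feat, point_feat, pix_idx):
    """
    color_feat: (B, C_rgb, H, W)  dense color features from the Swin branch
    point_feat: (B, C_geo, N)     per-point features from the KPConv branch
    pix_idx:    (B, N)            flattened pixel index each 3-D point projects to
    returns:    (B, C_rgb + C_geo, N) fused per-point features
    """
    B, C_rgb, H, W = color_feat.shape
    flat = color_feat.view(B, C_rgb, H * W)                 # (B, C_rgb, H*W)
    idx = pix_idx.unsqueeze(1).expand(-1, C_rgb, -1)        # (B, C_rgb, N)
    color_at_points = torch.gather(flat, 2, idx)            # align color to points
    return torch.cat([color_at_points, point_feat], dim=1)  # per-point fusion

# Example shapes:
fused = fuse_features(torch.randn(2, 64, 120, 160),
                      torch.randn(2, 128, 500),
                      torch.randint(0, 120 * 160, (2, 500)))
print(fused.shape)  # torch.Size([2, 192, 500])
```

Pairing features only at the pixels where observed points actually project is one way to keep outlier image regions from contaminating the fused per-point representation.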

Results Using the trained network model, input bobbin images were processed to produce six-degree-of-freedom (6-DoF) pose estimates. Specifically, the model could predict the poses of five different types of bobbins. Experimental results demonstrated a high level of consistency between the predicted and actual bobbin poses. During testing, 98.7% of the predictions yielded an average distance error of less than 10% between the estimated model points and the actual model points, indicating that the network model offers high accuracy and stability in the bobbin pose estimation task and effectively handles challenges such as lighting variations and color diversity that could interfere with pose estimation. Grasping experiments were conducted on five different types of bobbins, with each set comprising 100 grasping attempts. The grasping success rate ranged from 96% to 98% across bobbin types, with an average success rate of 96.8%. Overall, the success rates for the various bobbins were consistently high, with minimal differences between them; under the experimental conditions, all bobbin types could be reliably grasped. The average pose estimation response time per image frame was in the range of 0.11 to 0.14 s, while the average grasping response time was in the range of 2.07 to 2.21 s. The markedly higher grasping response time is primarily attributed to controlling the robotic arm through Python, which has relatively low execution efficiency; the model's efficient pose estimation suggests that deploying the system on a higher-performance hardware platform would leave substantial room for overall optimization.
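The accuracy criterion above (average distance between estimated and actual model points under 10%) resembles the standard average-distance (ADD) metric; a minimal sketch is given below. The 10%-of-model-diameter threshold convention, and the bounding-box diagonal used as the diameter, are assumptions for illustration.

```python
# Sketch of the average-distance (ADD) accuracy check assumed above.
# A pose is (R, t): a 3x3 rotation matrix and a 3-vector translation.
import numpy as np

def add_error(model_pts, R_pred, t_pred, R_gt, t_gt):
    """Mean distance between model points under the predicted and true poses."""
    pred = model_pts @ R_pred.T + t_pred   # (N, 3) points under predicted pose
    true = model_pts @ R_gt.T + t_gt       # (N, 3) points under ground-truth pose
    return np.linalg.norm(pred - true, axis=1).mean()

def is_accurate(model_pts, R_pred, t_pred, R_gt, t_gt, ratio=0.10):
    # Bounding-box diagonal as a simple stand-in for the model diameter.
    diameter = np.linalg.norm(model_pts.max(axis=0) - model_pts.min(axis=0))
    return add_error(model_pts, R_pred, t_pred, R_gt, t_gt) < ratio * diameter
```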

Conclusion The study indicates that the higher grasping response time is primarily due to the robotic arm being controlled through Python, which has relatively low execution efficiency. However, the model's high efficiency in pose estimation suggests that, if deployed on a higher-performance hardware platform, the overall system performance would have significant room for optimization. Overall, the method demonstrates high accuracy and stability in bobbin pose estimation and grasping, providing useful insights and references for intelligent design and practical applications in the textile industry.
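To picture where the two reported response times accrue, the hypothetical loop below separates per-frame pose estimation (0.11 to 0.14 s) from Python-driven arm motion (the bulk of the 2.07 to 2.21 s cycle). estimate_pose and move_to_grasp are placeholder names for illustration, not the authors' actual interfaces.

```python
# Hypothetical grasp cycle illustrating the two reported response times.
# camera, model, and arm are placeholder objects, not real library APIs.
import time

def grasp_cycle(camera, model, arm):
    rgb, cloud = camera.capture()           # RGB image and point cloud of the scene

    t0 = time.perf_counter()
    pose = model.estimate_pose(rgb, cloud)  # reported: ~0.11-0.14 s per frame
    t_pose = time.perf_counter() - t0

    t0 = time.perf_counter()
    arm.move_to_grasp(pose)                 # Python-driven motion dominates the
    t_grasp = time.perf_counter() - t0      # reported ~2.07-2.21 s cycle

    return t_pose, t_grasp
```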

Key words: deep learning, pose estimation, bobbin grasping, color feature extraction, geometric feature extraction, intelligent bobbin replacement

CLC number: TP391.41

Fig. 1  Experimental platform

Fig. 2  Framework of the robot grasping system

Fig. 3  Overall network structure

Fig. 4  Swin Transformer network structure

Fig. 5  Rigid convolution structure

Fig. 6  Experimental platform for building the real dataset

Fig. 7  Synthetic bobbin dataset

Fig. 8  Examples of augmentation techniques

Fig. 9  Five different types of bobbins

Fig. 10  Comparison of predicted and actual poses for different bobbin types at different camera angles

Fig. 11  Flow chart of bobbin grasping

Table 1  Grasping experiment data

Target object   Grasping attempts   Successes   Avg. pose estimation response time/s   Avg. grasping response time/s
Bobbin 1        100                 96          0.12                                    2.13
Bobbin 2        100                 98          0.14                                    2.21
Bobbin 3        100                 97          0.11                                    2.11
Bobbin 4        100                 96          0.13                                    2.08
Bobbin 5        100                 97          0.12                                    2.07

(Overall: 484 successes in 500 attempts, i.e. the 96.8% average success rate reported in the abstract.)

Fig. 12  Successful grasping of different types of bobbins
