Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (07): 227-235. doi: 10.13475/j.fzxb.20240803201

• Machinery & Equipment •

Three-dimensional visual positioning method of textile cylindrical components via binocular structured light

REN Zhimo1,2, ZHANG Wenchang1,2, LI Zhenyi1,2, YE He3, YANG Chunliu1,2, ZHANG Qian1,2()   

  1. Beijing National Innovation Institute of Lightweight Ltd., Beijing 100083, China
    2. State Key Laboratory of Advanced Forming Technology and Equipment, Beijing 100083, China
    3. College of Mechanical Engineering, Donghua University, Shanghai 201620, China
  • Received: 2024-08-20  Revised: 2025-03-05  Online: 2025-07-15  Published: 2025-08-14
  • Contact: ZHANG Qian  E-mail: 82618@163.com

Abstract:

Objective In the textile industry, most finished and semi-finished products contain cylindrical parts, which must frequently be transported during production. Manual transportation, however, involves high labor intensity and low efficiency, and exposes the parts to damage. Machine vision positioning that guides robotic grasping of cylindrical components offers a better solution to these problems, so a high-precision, rapid positioning method tailored to cylindrical parts is needed.

Method A positioning system for cylindrical components in the textile industry was designed based on binocular structured light vision. The end face of the cylindrical component was illuminated with cross structured light, and the scene was captured by a binocular camera. The images were preprocessed to pinpoint the intersections of the structured light with the end-face edge, and a spatial line-based stereo matching algorithm was proposed. From the imaged structured-light edge points, the spatial position and normal vector of the end face were solved, and the end-face center was fitted by least squares. In this way the spatial position and orientation of the cylindrical component were determined.
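The end-face fitting step described above can be sketched as follows. This is a minimal illustration only, assuming the 3-D structured-light/edge intersection points have already been reconstructed; the function name `fit_end_face` and the SVD-plane plus algebraic-circle formulation are the editor's assumptions, not necessarily the paper's exact algorithm:

```python
import numpy as np

def fit_end_face(points):
    """Fit the end-face plane and circular edge of a cylinder from
    reconstructed 3-D edge points; returns (center, normal, radius)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Least-squares plane: the normal is the right singular vector with the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[2]
    # Express the points in a 2-D frame spanned by the two in-plane axes.
    u, v = vt[0], vt[1]
    xy = np.column_stack([(pts - centroid) @ u, (pts - centroid) @ v])
    # Algebraic least-squares circle fit: x^2 + y^2 = 2a*x + 2b*y + c.
    A = np.column_stack([2.0 * xy, np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u + b * v
    return center, normal, radius
```

The plane normal gives the end-face orientation, and the fitted circle center gives the grasping position, mirroring the least-squares center fitting mentioned in the Method.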

Result To validate the accuracy of the visual positioning system, a grasping experiment was conducted with an unwinding disk as the experimental object. With an eye-in-hand camera installation, 1 280 pixel×960 pixel images were captured. Multiple grasping trials established a detection range of ±35 mm in position and ±10° in angle, and the robot, guided by machine vision, successfully grasped the workpiece. The cylindrical end face, demarcated by two spatial straight lines, exhibited a maximum positioning error of 0.194 mm, demonstrating that the method effectively identifies the position and orientation of the end face. Because the edges of cylindrical parts in the textile industry are mostly circular arcs, the structured light features there were weaker, and the maximum edge error of the end-face circle reached 0.956 mm. However, the positioning error of the end-face center lay primarily in the radial direction of the end face, with relatively small errors in the normal direction. Given the significant radial margin designed into actual robot fixtures, the visual positioning method was able to accurately guide the robot to complete grasping tasks.

Conclusion The proposed spatial positioning method for textile cylindrical components utilizes binocular dual-line structured light technology. It effectively resolves visual positioning challenges and automates the loading and unloading of textile products during production. By actively projecting cross structured light onto the cylindrical ends, it marks these areas with readily recognizable features, facilitating spatial positioning, and a tailored spatial line-based stereo matching algorithm boosts the speed and accuracy of localization. First, binocular vision calibration precisely determines the internal and external parameters of the cameras and their relative pose. Then, a stereo matching algorithm based on spatial straight lines, targeting the imaging characteristics of the cross structured light, accomplishes the three-dimensional reconstruction task. Using the coordinates of the intersection points between the structured light and the workpiece edges, the method models the cylindrical end planes and circular edges and accurately determines their spatial poses. The results demonstrate precise robotic grasping, with positioning errors of less than 0.2 mm for the end plane and less than 1.0 mm for the circular edge, meeting the demands of industrial production and providing a reference for the application of machine vision in textile production.
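The reconstruction step — back-projecting each matched image point into a spatial ray from its camera and recovering the 3-D point from the relationship between the two rays (cf. Fig. 8) — can be sketched as below. This is a hedged illustration: `back_project` and `ray_midpoint` are hypothetical names, a simple pinhole model is assumed, and the paper matches whole structured-light lines rather than isolated points:

```python
import numpy as np

def back_project(K, R, t, pixel):
    """World-frame ray (origin, unit direction) through a pixel, for a
    pinhole camera with intrinsics K and extrinsics (R, t): x_cam = R x_w + t."""
    origin = -R.T @ t                                   # camera center in world frame
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    direction = R.T @ d_cam                             # rotate ray into world frame
    return origin, direction / np.linalg.norm(direction)

def ray_midpoint(o1, d1, o2, d2):
    """Midpoint of the common perpendicular of two (possibly skew) rays,
    a standard triangulation of a binocular correspondence."""
    # Solve [d1 -d2][s, t]^T ~= o2 - o1 in the least-squares sense.
    A = np.column_stack([d1, -d2])
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

With noise-free correspondences the two rays intersect exactly; with real detections they are skew, and the common-perpendicular midpoint is the usual least-squares compromise.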

Key words: intelligent manufacturing, binocular vision, 3-D positioning, machine vision, structured light, textile cylindrical component

CLC Number: TS103.7

Fig.1

Visual system of binocular cross structured light

Fig.2

Cylindrical barrel unwinding coil

Fig.3

Binocular camera (a) and checkerboard target (b)

Fig.4

Image after dual-target calibration

Fig.5

Plane positioning of end face

Fig.6

Spatial straight line stereo matching

Fig.7

Intersection of spatial line and space plane

Fig.8

Relationship between two spatial lines

Fig.9

Cross laser imaging model

Fig.10

Planar reference frame of cylinder end plane

Fig.11

Test conditions

Fig.12

Image processing interface

Fig.13

Test photo

Tab.1

Positioning results

No.    End-face position                 End-face normal
       x/mm       y/mm       z/mm       nx          ny          nz
1      0.118      0.003      0.122      -0.041261   -0.033963   0.998570
2      -26.050    2.319      0.760      0.113169    -0.032807   0.993033
3      2.841      -0.647     0.183      -0.047683   -0.033580   0.998297
4      13.955     -2.066     1.027      -0.101835   -0.032136   0.994282
5      25.766     -5.402     1.241      -0.172326   -0.029210   0.984606
6      33.347     -8.218     1.276      -0.222243   -0.024784   0.974676
7      -2.606     5.790      4.253      0.006148    -0.058743   0.998254
8      -9.851     1.611      0.286      0.036814    -0.031182   0.998835
9      28.330     2.307      0.627      0.071779    -0.031817   0.996912
10     18.818     0.490      12.878     -0.123985   -0.116189   0.985458
11     14.457     -2.223     1.321      -0.112520   -0.034696   0.993043
12     25.341     -3.309     5.548      -0.186832   -0.063098   0.980363
13     -1.933     4.596      12.763     -0.006989   -0.122126   0.992489
14     -0.685     12.240     0.747      -0.013640   -0.033913   0.999331
15     -12.356    20.662     1.531      0.050822    -0.033684   0.998139
16     3.758      2.453      7.545      -0.059502   -0.077804   0.995191
17     -0.717     12.258     0.677      -0.013800   -0.031836   0.999397
18     9.369      1.987      5.167      -0.085283   -0.063943   0.994326
19     21.148     22.087     2.674      -0.108385   -0.029557   0.993669
20     11.489     7.493      1.074      -0.081439   -0.026786   0.996318
21     2.748      9.017      0.718      -0.034601   -0.030045   0.998949
22     -5.698     8.997      0.659      0.016878    -0.032599   0.999325
23     -4.522     1.655      0.107      -0.000952   -0.031937   0.999489
24     11.488     16.819     1.865      -0.065500   -0.032633   0.997318
25     0.118      0.003      0.122      -0.041261   -0.033963   0.998570

Fig.14

Fabric roll grasping experiment site

Tab.2

Positioning error

No.    End-face error    Circle edge errors
       ep/mm             e1/mm    e2/mm    e3/mm    e4/mm
1 0.085 0.110 0.520 0.008 0.189
2 0.045 0.019 0.252 0.689 0.144
3 0.151 0.103 0.443 0.135 0.223
4 0.195 0.107 0.388 0.171 0.175
5 0.183 0.165 0.410 0.121 0.295
6 0.061 0.223 0.389 0.184 0.348
7 0.194 0.086 0.306 0.129 0.138
8 0.019 0.068 0.189 0.048 0.129
9 0.159 0.034 0.299 0.072 0.167
10 0.067 0.153 0.322 0.264 0.097
11 0.181 0.171 0.136 0.159 0.235
12 0.018 0.127 0.288 0.217 0.031
13 0.084 0.093 0.577 0.106 0.116
14 0.179 0.182 0.225 0.066 0.134
15 0.163 0.048 0.146 0.041 0.183
16 0.110 0.130 0.690 0.956 0.059
17 0.157 0.104 0.215 0.102 0.121
18 0.004 0.016 0.499 0.098 0.121
19 0.073 0.151 0.292 0.007 0.148
20 0.150 0.116 0.192 0.271 0.143
21 0.099 0.044 0.213 0.158 0.128
22 0.041 0.047 0.308 0.400 0.195
23 0.126 0.063 0.146 0.178 0.122
24 0.193 0.117 0.111 0.051 0.173
25 0.085 0.110 0.520 0.008 0.189
[1] ZHENG Xiaohu, LIU Zhenghao, CHEN Feng, et al. Current status and prospect of intelligent development in textile industry[J]. Journal of Textile Research, 2023, 44(8): 205-216.
[2] WANG Wensheng, LI Tianjian, RAN Yuchen, et al. Method for position detection of cheese yarn rod[J]. Journal of Textile Research, 2020, 41(3): 160-167.
[3] SHI Weimin, HAN Sijie, TU Jiajia, et al. Empty yarn bobbin positioning method based on machine vision[J]. Journal of Textile Research, 2023, 44(11): 105-112. doi: 10.13475/j.fzxb.20220605501
[4] NI Yiqi, GUAN Shengqi, GUAN Yucan, et al. Binocular vision bobbin identification and positioning based on improved SSD deep learning algorithm[J]. Basic Sciences Journal of Textile Universities, 2021, 34(2): 59-66.
[5] MAO Huimin, TU Jiajia, SUN Lei, et al. Key technology research of bobbin change actuator suitable for multiple bobbin types[J]. Journal of Textile Research, 2024, 45(6): 193-200.
[6] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
[7] LIU Jianchun, GAO Yongkang, CHEN Yongzhong. Research on binocular vision guided localization and grasping system of heterogeneous parts[J]. Machinery Design & Manufacture, 2022(11): 285-288, 295.
[8] YANG Jixing, HU Weixia, ZHAO Jie. Research on calibration and stereo matching based on binocular vision[J]. Electronic Design Engineering, 2022, 30(13): 50-53, 58.
[9] HE Yuquan, ZHANG Yongjie, XIE Guangqi, et al. Target plane positioning method based on bi-linear charge coupled device[J]. Laser & Optoelectronics Progress, 2022, 59(22): 196-203.
[10] LIN C H, POWELL R A, JIANG L, et al. Real-time depth measurement for micro-holes drilled by lasers[J]. Measurement Science & Technology, 2010, 21(2): 1-6.
[11] ZHANG Bingjie, CAI Lailiang, WANG Xin, et al. Automatic extraction algorithm of step lines from point cloud in open-pit mine based on gradient of scalar field[J]. Bulletin of Surveying and Mapping, 2023(7): 63-68. doi: 10.13474/j.cnki.11-2246.2023.0202
[12] COMLEKCILER I T, GUNES S, IRGIN C. Artificial 3D contactless measurement in orthognathic surgery with binocular stereo vision[J]. Applied Soft Computing, 2016, 41: 505-514.
[13] ZHENG Wen, LIN Wen, HAN Xiaodong, et al. Research of the visual digital supervision of the whole process of the engineering project[J]. Journal of Minjiang University, 2023, 44(2): 41-52.
[14] SONG Limei, ZHANG Jipeng, LI Yunpeng, et al. 3D reconstruction method based on multi-view infrared sensor[J]. Chinese Journal of Liquid Crystals and Displays, 2023, 38(6): 759-769.
[15] LI Chunlei, WANG Yanan, GE Renlei, et al. Robust least squares method of nonlinear model and its application in circle center fitting[J]. Shandong Chemical Industry, 2023, 52(13): 178-181.
[16] LUO Baolin, ZHANG Xianzhou, LUO Chao. Hand-eye calibration algorithm for binocular robot based on Rodrigues matrix and total least squares[J]. Journal of Geomatics Science and Technology, 2019, 36(3): 244-249.