Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (07): 227-235. doi: 10.13475/j.fzxb.20240803201

• Machinery and Equipment •

Three-dimensional visual positioning method of textile cylindrical components via binocular structured light

REN Zhimo1,2, ZHANG Wenchang1,2, LI Zhenyi1,2, YE He3, YANG Chunliu1,2, ZHANG Qian1,2

  1. Beijing National Innovation Institute of Lightweight Ltd., Beijing 100083, China
  2. State Key Laboratory of Advanced Forming Technology and Equipment, Beijing 100083, China
  3. College of Mechanical Engineering, Donghua University, Shanghai 201620, China
  • Received: 2024-08-20  Revised: 2025-03-05  Published: 2025-07-15  Online: 2025-08-14
  • Corresponding author: ZHANG Qian (1981—), female, professor-level senior engineer, Ph. D. Her research interests include intelligent manufacturing, digital twins, and compound robots. E-mail: zhangqian82618@163.com
  • About the first author: REN Zhimo (1997—), male, master's degree. His main research interest is machine vision.
  • Funding: National Natural Science Foundation of China (92367301)



Abstract:

Objective In the textile industry, most finished and semi-finished products contain cylindrical parts, which must be transported frequently throughout the production process. Manual transportation, however, entails high labor intensity, low efficiency, and a risk of damage to the parts. Machine vision positioning that guides robotic grasping of cylindrical components offers a better solution to these problems, so a high-precision, rapid positioning method tailored to cylindrical parts is needed.

Method A positioning system for cylindrical components in the textile industry was designed based on binocular structured light vision. The end face of the cylindrical component was illuminated with cross structured light and imaged by a binocular camera. The images were preprocessed to pinpoint the intersections of the structured light with the workpiece edge, and a stereo matching algorithm based on spatial straight lines was proposed to complete the three-dimensional reconstruction. The reconstructed structured-light lines were used to solve the spatial position and normal vector of the end face, and the intersection points with the edge were used to fit the end-face center by least squares, thereby determining the spatial position and orientation of the cylindrical component.
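The reconstruction and fitting steps lend themselves to a compact implementation. The sketch below is a minimal illustration, not the authors' code: it assumes projection matrices P1 and P2 from the calibration step and pixel correspondences already produced by the spatial-line-based matching, triangulates them with OpenCV, fits the end-face plane by least squares (SVD), and fits the end-face circle in plane coordinates with a Kasa least-squares fit.

```python
# Minimal sketch (assumptions, not the authors' code): triangulate matched
# structured-light points, fit the end-face plane, then fit the end-face
# circle from the structured-light/edge intersection points.
import numpy as np
import cv2

def triangulate(P1, P2, pts_l, pts_r):
    """Nx2 pixel correspondences -> Nx3 points in the left camera frame."""
    h = cv2.triangulatePoints(P1, P2,
                              pts_l.T.astype(np.float64),
                              pts_r.T.astype(np.float64))
    return (h[:3] / h[3]).T

def fit_plane(pts):
    """Least-squares plane through Nx3 points -> (centroid, unit normal)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[2]                       # normal = direction of least variance

def fit_circle(xy):
    """Kasa least-squares circle fit on Nx2 in-plane points -> (center, radius)."""
    A = np.column_stack([2.0 * xy, np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy]), np.sqrt(k + cx ** 2 + cy ** 2)

def end_face_pose(P1, P2, face_l, face_r, edge_l, edge_r):
    """Plane from structured-light points on the end face; circle from the
    structured-light/edge intersection points."""
    centroid, n = fit_plane(triangulate(P1, P2, face_l, face_r))
    # Orthonormal in-plane axes (u, v) to express edge points in 2-D.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:          # normal (anti)parallel to z-axis
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    edge_pts = triangulate(P1, P2, edge_l, edge_r)
    xy = (edge_pts - centroid) @ np.column_stack([u, v])
    (cx, cy), r = fit_circle(xy)
    center = centroid + cx * u + cy * v   # circle center back in 3-D
    return center, n, r
```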

Result To validate the accuracy of the visual positioning system, a grasping experiment was conducted with an unwinding disk as the object. With an eye-in-hand camera installation, 1 280 pixel × 960 pixel images were captured. Multiple grasping trials established a detection range of ±35 mm in position and ±10° in angle, within which the robot, guided by machine vision, successfully grasped the workpiece. The cylindrical end face, demarcated by two spatial straight lines, exhibited a maximum positioning error of 0.194 mm, demonstrating that the method reliably identifies the position and orientation of the end face. Because the edges of cylindrical parts in the textile industry are mostly rounded arcs, the structured-light features there are weaker, and the maximum edge error of the end-face circle reached 0.956 mm. The resulting error in the end-face center lay mainly in the radial direction of the end face, with comparatively small errors along the normal. Since robot fixtures in practice are designed with a significant margin in the radial direction of the end face, the visual positioning method can accurately guide the robot to complete grasping tasks.
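The exact formulas behind Tab. 2's error columns are not spelled out on this page; a plausible, hedged reading is that ep measures deviation from the fitted end plane and e1 to e4 measure radial deviations of the four edge intersection points from the fitted end-face circle. A sketch under those assumptions:

```python
# Hedged sketch: assumed definitions of the error metrics in Tab. 2, not the
# paper's exact formulas. ep: distance of a check point from the fitted end
# plane; e1-e4: radial deviation of each edge intersection point from the
# fitted end-face circle.
import numpy as np

def plane_error(p, centroid, n):
    """Unsigned point-to-plane distance (n assumed unit length)."""
    return abs(np.dot(p - centroid, n))

def circle_edge_errors(edge_pts, center, n, radius):
    """|in-plane distance from circle center - radius| per edge point."""
    n = n / np.linalg.norm(n)
    d = edge_pts - center
    in_plane = d - np.outer(d @ n, n)     # remove the normal component
    return np.abs(np.linalg.norm(in_plane, axis=1) - radius)
```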

Conclusion The proposed spatial positioning method for textile cylindrical components, based on binocular cross (dual-line) structured light, effectively resolves the visual positioning challenge and automates the loading and unloading of textile products during production. By actively projecting cross structured light onto the cylinder ends, it endows these areas with readily recognizable markers that facilitate spatial positioning, and the tailored stereo matching algorithm based on spatial straight lines improves both the speed and the accuracy of localization. First, binocular vision calibration precisely determines the internal and external parameters of the cameras as well as their positional relationship. Then the stereo matching algorithm, designed for the imaging characteristics of the cross structured light, accomplishes the three-dimensional reconstruction. Using the coordinates of the intersection points between the structured light and the workpiece edges, the method fits the cylindrical end plane and its circular edge, accurately pinpointing the spatial pose. The results demonstrate precise robotic grasping, with positioning errors of less than 0.2 mm for the end plane and less than 1.0 mm for the circular edge, meeting the demands of industrial production and providing a reference for the application of machine vision in textile production.
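The binocular calibration step (intrinsics, extrinsics, and the inter-camera transform from a planar chessboard target, cf. Fig. 3) is commonly done with OpenCV. The sketch below is an assumed, minimal version; the pattern size, square size, and file paths are invented for illustration, not values from the paper.

```python
# Assumed OpenCV sketch of binocular calibration with a chessboard target;
# PATTERN, SQUARE, and the image paths are hypothetical.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)           # inner chessboard corners (assumed)
SQUARE = 10.0              # square size in mm (assumed)

# 3-D corner coordinates on the (planar) target, z = 0.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, PATTERN)
    okr, cr = cv2.findChessboardCorners(gr, PATTERN)
    if okl and okr:                       # keep only pairs seen by both cameras
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]                     # (width, height) of the last image
# Per-camera intrinsics first, then the stereo extrinsics (R, T) between cameras.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```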

Key words: intelligent manufacturing, binocular vision, 3-D positioning, machine vision, structured light, textile cylindrical component

CLC number: TS103.7

Fig. 1 Binocular cross structured light vision system
Fig. 2 Cylindrical unwinding disk
Fig. 3 Binocular cameras and 2-D chessboard calibration target
Fig. 4 Images after binocular calibration
Fig. 5 End-face plane positioning
Fig. 6 3-D reconstruction of spatial straight lines
Fig. 7 Intersection of a spatial straight line and a spatial plane
Fig. 8 Relationship between two spatial straight lines
Fig. 9 Imaging model of the cross laser
Fig. 10 2-D coordinate system on the cylinder end plane
Fig. 11 Experimental conditions
Fig. 12 Image processing interface
Fig. 13 Experiment photographs

Tab. 1 Positioning results

No.   End-face position                 End-face normal direction
      x/mm       y/mm      z/mm        nx           ny           nz
1     0.118      0.003     0.122       -0.041261    -0.033963    0.998570
2     -26.050    2.319     0.760       0.113169     -0.032807    0.993033
3     2.841      -0.647    0.183       -0.047683    -0.033580    0.998297
4     13.955     -2.066    1.027       -0.101835    -0.032136    0.994282
5     25.766     -5.402    1.241       -0.172326    -0.029210    0.984606
6     33.347     -8.218    1.276       -0.222243    -0.024784    0.974676
7     -2.606     5.790     4.253       0.006148     -0.058743    0.998254
8     -9.851     1.611     0.286       0.036814     -0.031182    0.998835
9     28.330     2.307     0.627       0.071779     -0.031817    0.996912
10    18.818     0.490     12.878      -0.123985    -0.116189    0.985458
11    14.457     -2.223    1.321       -0.112520    -0.034696    0.993043
12    25.341     -3.309    5.548       -0.186832    -0.063098    0.980363
13    -1.933     4.596     12.763      -0.006989    -0.122126    0.992489
14    -0.685     12.240    0.747       -0.013640    -0.033913    0.999331
15    -12.356    20.662    1.531       0.050822     -0.033684    0.998139
16    3.758      2.453     7.545       -0.059502    -0.077804    0.995191
17    -0.717     12.258    0.677       -0.013800    -0.031836    0.999397
18    9.369      1.987     5.167       -0.085283    -0.063943    0.994326
19    21.148     22.087    2.674       -0.108385    -0.029557    0.993669
20    11.489     7.493     1.074       -0.081439    -0.026786    0.996318
21    2.748      9.017     0.718       -0.034601    -0.030045    0.998949
22    -5.698     8.997     0.659       0.016878     -0.032599    0.999325
23    -4.522     1.655     0.107       -0.000952    -0.031937    0.999489
24    11.488     16.819    1.865       -0.065500    -0.032633    0.997318
25    0.118      0.003     0.122       -0.041261    -0.033963    0.998570

Fig. 14 Cloth roll grasping experiment site

Tab. 2 Positioning errors

No.   End-face error    Circle edge errors
      ep/mm             e1/mm     e2/mm     e3/mm     e4/mm
1     0.085             0.110     0.520     0.008     0.189
2     0.045             0.019     0.252     0.689     0.144
3     0.151             0.103     0.443     0.135     0.223
4     0.195             0.107     0.388     0.171     0.175
5     0.183             0.165     0.410     0.121     0.295
6     0.061             0.223     0.389     0.184     0.348
7     0.194             0.086     0.306     0.129     0.138
8     0.019             0.068     0.189     0.048     0.129
9     0.159             0.034     0.299     0.072     0.167
10    0.067             0.153     0.322     0.264     0.097
11    0.181             0.171     0.136     0.159     0.235
12    0.018             0.127     0.288     0.217     0.031
13    0.084             0.093     0.577     0.106     0.116
14    0.179             0.182     0.225     0.066     0.134
15    0.163             0.048     0.146     0.041     0.183
16    0.110             0.130     0.690     0.956     0.059
17    0.157             0.104     0.215     0.102     0.121
18    0.004             0.016     0.499     0.098     0.121
19    0.073             0.151     0.292     0.007     0.148
20    0.150             0.116     0.192     0.271     0.143
21    0.099             0.044     0.213     0.158     0.128
22    0.041             0.047     0.308     0.400     0.195
23    0.126             0.063     0.146     0.178     0.122
24    0.193             0.117     0.111     0.051     0.173
25    0.085             0.110     0.520     0.008     0.189