Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (12): 208-215. doi: 10.13475/j.fzxb.20250705901

• Apparel Engineering •

Automatic size measurement of women's trousers based on machine vision and YOLO11n

TUO Wu1, LIU Qiongyang1, LI Qingxiang1, CHEN Qian1, FAN Ruige1, LI Pei2

  1. College of Fashion Technology, Zhongyuan University of Technology, Zhengzhou, Henan 451191, China
  2. Research Institute of Textile and Clothing Industries, Zhongyuan University of Technology, Zhengzhou, Henan 451191, China
  • Received: 2025-07-24  Revised: 2025-09-15  Published: 2025-12-15  Online: 2026-02-06
  • Biography: TUO Wu (1968—), female, professor, master's degree. Research interest: garment structure technology. E-mail: tuowu@zut.edu.cn
  • Funding: Key Scientific Research Project of Higher Education Institutions of Henan Province (25A540002)

Abstract: To address the low efficiency, strong subjectivity, and unstable accuracy of size measurement in the apparel industry, an automatic size measurement method for women's trousers based on machine vision and a deep learning model is proposed. An acquisition device was built to capture images of women's trousers, and a checkerboard calibration board was designed to calibrate the camera parameters and compute the mapping between pixel and physical dimensions. Sample images were collected to build a dataset of flat-laid women's trousers; after preprocessing and annotation, the dataset was fed into the YOLO11n-pose pose estimation model to detect eight key points on the trousers, and the actual sizes were obtained from the pixel equivalent derived with Zhang's camera calibration method. The results show that the mean average precision of key point detection exceeds 99%, giving the model high-precision key point detection capability, and that the average relative errors for the five key dimensions (waist girth, hip girth, leg opening width, crotch depth, and trouser length) range from 0.58% to 0.82%, all within industry standard requirements, providing an effective technical path toward intelligent garment quality inspection.


Abstract:

Objective To address the low efficiency, high subjectivity, and unstable accuracy of traditional size measurement methods in the apparel industry, as well as the high cost and complex operation that keep existing three-dimensional measurement devices from widespread adoption, this paper proposes an automatic measurement method for women's trousers that integrates machine vision with the YOLO11 deep learning model. The approach aims to achieve non-contact, precise measurement of the key sizes of women's trousers, thereby advancing the digital and intelligent transformation of garment size inspection and providing a robust technical solution for automated quality control in the apparel industry.

Method The measurement system comprised hardware and software components: the hardware captured the images, and the software performed the size measurement. A dataset of 2 000 images of women's trousers was constructed for the experiment. After image preprocessing, annotation, and format conversion, the YOLO11n-pose pose estimation model was used to detect eight key points. The pixel coordinates of the detected key points were converted from pixel distances to actual physical distances using Zhang's camera calibration method, enabling calculation of the sizes of five target regions. Three pairs of women's trousers were selected to compare the automatic measurement results with manual measurement results, and measurement accuracy was evaluated by absolute error and relative error.
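The final step described above — converting the pixel distance between two detected key points into a physical size — can be sketched in a few lines. This is a minimal illustration, not the paper's code: the key point coordinates below are hypothetical, and the pixel equivalent of 0.298 913 mm/pixel is the calibration result reported in the Results section.

```python
import math

PIXEL_EQUIVALENT_MM = 0.298913  # mm per pixel, from Zhang's calibration (Results)

def pixel_size_to_mm(p1, p2, pixel_equivalent=PIXEL_EQUIVALENT_MM):
    """Convert the Euclidean pixel distance between two key points to millimetres."""
    dist_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return dist_px * pixel_equivalent

# Hypothetical key points (pixel coordinates) on a flat-laid trouser image:
left_waist, right_waist = (412.0, 180.0), (1500.0, 180.0)
waist_width_mm = pixel_size_to_mm(left_waist, right_waist)

# For a flat-laid garment, a girth is twice the measured flat width.
waist_girth_mm = 2 * waist_width_mm
```

With these made-up coordinates the flat waist width is about 325.2 mm, i.e. a girth of about 650.4 mm; in the real system the two coordinates come from the YOLO11n-pose detection output.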

Results The YOLO11n-pose key point detection model was trained on the self-built dataset of women's trousers. The overall loss decreased gradually and eventually stabilized, indicating good fitting performance. As training epochs increased, the model's precision, recall, and mAP (mean average precision) rose and then stabilized, showing that the model converged with improved performance. The model achieved 100% precision and recall, with mAP50 and mAP50-95 reaching 99.5% and 99.3%, respectively. The detection results show that, by learning from a large amount of labeled data, the model can effectively locate the key measurement points of women's trousers, providing reliable support for subsequent size measurement. For camera calibration, calibration boards were fabricated and imaged to determine the pixel equivalent. The average reprojection error after calibration was 0.070 549 pixel, indicating high-precision calibration that meets the experimental requirements; the pixel equivalent was calculated to be 0.298 913 mm/pixel. After converting pixel distances to actual physical distances, the trouser sizes were calculated from the vertical and horizontal Euclidean distances between pairs of key positioning points. Three pairs of women's trousers were selected to compare the automatic measurements with manual measurements, using absolute error and relative error as evaluation criteria. The average absolute errors between the measured and actual sizes at the five measurement locations were 5.45, 5.89, 1.27, 2.35, and 6.26 mm, and the average relative errors were 0.82%, 0.61%, 0.58%, 0.64%, and 0.64%, respectively.
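The pixel-equivalent step above amounts to dividing the known physical size of a checkerboard square by its mean spacing in detected corner pixels. A minimal sketch follows; the 20 mm square size and the corner coordinates are hypothetical values chosen for illustration, not the paper's calibration data.

```python
def pixel_equivalent(square_size_mm, corner_xs_px):
    """Estimate mm-per-pixel from one row of detected checkerboard corner
    x-coordinates: physical square size divided by mean pixel spacing."""
    spacings = [b - a for a, b in zip(corner_xs_px, corner_xs_px[1:])]
    mean_spacing_px = sum(spacings) / len(spacings)
    return square_size_mm / mean_spacing_px

# Hypothetical 20 mm squares whose corners are detected about 66.9 px apart:
k = pixel_equivalent(20.0, [100.0, 166.9, 233.8, 300.7])  # ~0.299 mm/pixel
```

In practice the corner coordinates would come from a full calibration (e.g. OpenCV's chessboard-corner detection feeding Zhang's method), which also yields the reprojection error used here to judge calibration quality.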
These results comply with the industry standard requirements, indicating that the proposed machine vision and deep learning based automatic measurement method for women's trouser sizes is accurate and reliable.
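The error evaluation described above reduces to a short computation. The sketch below reproduces one entry of Table 2 (waist girth of trousers 1: system mean 650.43 mm, manual mean 644.00 mm); the function name is illustrative.

```python
def measurement_errors(system_mm, manual_mm):
    """Absolute error (mm) and relative error (%) of a system measurement
    against the manual reference value."""
    abs_err = abs(system_mm - manual_mm)
    rel_err = abs_err / manual_mm * 100.0
    return round(abs_err, 2), round(rel_err, 2)

abs_err, rel_err = measurement_errors(650.43, 644.00)
# abs_err = 6.43 mm, rel_err = 1.00 %
```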

Conclusion This study combines machine vision, automatic measurement system design, and the lightweight deep learning algorithm YOLO11n-pose to achieve non-contact, precise measurement of the key sizes of women's trousers. Experimental results show that the system's measurement errors for women's trousers are consistently within the industry's acceptable range, validating the feasibility of the method in actual production. From an application perspective, the system breaks through the bottleneck of traditional manual measurement, providing apparel companies with an automated, standardized size inspection solution. Future work can further improve measurement robustness and extend machine vision based size inspection to all apparel categories, driving the digital transformation of size measurement within the apparel industry.

Key words: women's trousers, machine vision, deep learning, camera calibration, YOLO11, key point detection, trouser size, non-contact measurement

CLC number: TS941.26

Fig. 1

Structure of the measurement system. Note: ①—measurement worktable (vacuum ironing table); ②—smartphone camera; ③—software (image processing and detection calculation); ④—display of size measurement results.

Fig. 2

Measurement schematic

Fig. 3

Flat-laid images of some women's trousers

Fig. 4

Principle of image edge-replication padding
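Edge-replication padding, as illustrated in Fig. 4, copies the border pixels of an image outward so that a resized image keeps its content intact without introducing artificial values. A minimal pure-Python sketch on a list-of-rows image follows (in OpenCV this corresponds to `cv2.copyMakeBorder` with `BORDER_REPLICATE` — an assumption about the implementation, not stated in the paper):

```python
def replicate_pad(img, top, bottom, left, right):
    """Pad a 2-D image (list of pixel rows) by replicating its edge pixels."""
    # Extend each row sideways by repeating its first/last pixel.
    rows = [[row[0]] * left + list(row) + [row[-1]] * right for row in img]
    # Repeat the first/last padded row vertically.
    top_rows = [list(rows[0]) for _ in range(top)]
    bottom_rows = [list(rows[-1]) for _ in range(bottom)]
    return top_rows + rows + bottom_rows

img = [[1, 2],
       [3, 4]]
out = replicate_pad(img, 1, 1, 1, 1)
# out == [[1, 1, 2, 2],
#         [1, 1, 2, 2],
#         [3, 3, 4, 4],
#         [3, 3, 4, 4]]
```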

Table 1

Experimental environment configuration

| Item | Configuration |
| Operating system | Windows 10 |
| CPU | Intel(R) Xeon(R) Gold 6230 @ 2.10 GHz |
| GPU | NVIDIA GeForce RTX 3090 |
| IDE | PyCharm 2024.1 |
| Interpreter | Python 3.11 |
| Deep learning framework | PyTorch 2.7.1 + CUDA 12.8 |

Fig. 5

Model training process

Fig. 6

Key point training results

Fig. 7

Reprojection error distribution

Table 2

Automatic and manual measurement results of women's trousers

| Trousers No. | Location | System mean/mm | Manual mean/mm | Absolute error/mm | Relative error/% |
| Trousers 1 | Waist girth | 650.43 | 644.00 | 6.43 | 1.00 |
| | Hip girth | 971.47 | 979.00 | 7.53 | 0.77 |
| | Leg opening width | 216.10 | 218.00 | 1.90 | 0.87 |
| | Crotch depth | 338.67 | 340.00 | 1.33 | 0.39 |
| | Trouser length | 969.49 | 962.00 | 7.49 | 0.78 |
| Trousers 2 | Waist girth | 639.67 | 645.00 | 5.33 | 0.83 |
| | Hip girth | 920.05 | 926.00 | 5.95 | 0.64 |
| | Leg opening width | 217.68 | 219.00 | 1.32 | 0.60 |
| | Crotch depth | 373.94 | 376.00 | 2.06 | 0.55 |
| | Trouser length | 981.89 | 978.00 | 3.89 | 0.40 |
| Trousers 3 | Waist girth | 718.59 | 714.00 | 4.59 | 0.64 |
| | Hip girth | 985.82 | 990.00 | 4.18 | 0.42 |
| | Leg opening width | 227.58 | 227.00 | 0.58 | 0.26 |
| | Crotch depth | 370.65 | 367.00 | 3.65 | 1.00 |
| | Trouser length | 989.39 | 982.00 | 7.39 | 0.75 |

Table 3

Errors of the five measurement locations of women's trousers

| | Absolute error/mm | | | | Relative error/% | | | |
| Location | Trousers 1 | Trousers 2 | Trousers 3 | Mean | Trousers 1 | Trousers 2 | Trousers 3 | Mean |
| Waist girth | 6.43 | 5.33 | 4.59 | 5.45 | 1.00 | 0.83 | 0.64 | 0.82 |
| Hip girth | 7.53 | 5.95 | 4.18 | 5.89 | 0.77 | 0.64 | 0.42 | 0.61 |
| Leg opening width | 1.90 | 1.32 | 0.58 | 1.27 | 0.87 | 0.60 | 0.26 | 0.58 |
| Crotch depth | 1.33 | 2.06 | 3.65 | 2.35 | 0.39 | 0.55 | 1.00 | 0.64 |
| Trouser length | 7.49 | 3.89 | 7.39 | 6.26 | 0.78 | 0.40 | 0.75 | 0.64 |
[1] GUO Zengrong. Research and design of a fitting robot system for garment e-commerce[D]. Hangzhou: Zhejiang University, 2019: 1-4.
[2] WANG Xiaofei, WANG Yanzhen. Development status and application of 3D anthropometric technology[J]. Wool Textile Journal, 2021, 49(10): 106-111.
[3] DONG Jianming, HU Jueliang. An efficient method for automatic measurement of garment dimensions[J]. Journal of Textile Research, 2008, 29(5): 98-101.
[4] LI Pengfei, ZHENG Mingzhi, JING Junfeng. Measurement system of garment dimension based on machine vision[J]. Wool Textile Journal, 2017, 45(3): 42-47.
[5] WANG Shengwei, ZHANG Kanjian. Online measurement technology for garment size based on corner detection[J]. Information Technology and Informatization, 2018(12): 73-75.
[6] XIAO Yi. Design and development of a garment and human body measurement system based on photography[D]. Hangzhou: Zhejiang University, 2019: 16-37.
[7] LI C, XU Y, XIAO Y, et al. Automatic measurement of garment sizes using image recognition[C]//Graphics and Signal Processing. New York: Association for Computing Machinery, 2017: 30-34.
[8] SERRAT J, LUMBRERAS F, RUIZ I. Learning to measure for preshipment garment sizing[J]. Measurement, 2018, 130: 327-339. doi: 10.1016/j.measurement.2018.08.019
[9] WANG Yiwen. Research on the automatic measurement method of ancient Hanfu size[D]. Hangzhou: Zhejiang Sci-Tech University, 2021: 23-50.
[10] WANG Yiwen, LUO Ronglei, KANG Yuzhe. Automatic measurement of key dimensions for Han-style costumes based on use of convolutional neural network[J]. Journal of Textile Research, 2020, 41(12): 124-129. doi: 10.13475/j.fzxb.20200505006
[11] PAULAUSKAITE-TARASEVICIENE A, NOREIKA E, PURTOKAS R, et al. An intelligent solution for automatic garment measurement using image recognition technologies[J]. Applied Sciences, 2022, 12(9): 4470. doi: 10.3390/app12094470
[12] KIM S, MOON H, OH J, et al. Automatic measurements of garment sizes using computer vision deep learning models and point cloud data[J]. Applied Sciences, 2022, 12(10): 5286. doi: 10.3390/app12105286
[13] GUAN Fangli, XU Aijun. Tree DBH measurement method based on smartphone and machine vision technology[J]. Journal of Zhejiang A&F University, 2018, 35(5): 892-899.
[14] TUO Wu, DU Cong, CHEN Qian, et al. Clothing pattern contour extraction based on computer vision and Canny algorithm[J]. Journal of Textile Research, 2024, 45(5): 174-182.
[15] LI Huimin. Research on the complexity of multi-variety, small-batch garment sewing production and its transfer mechanism[D]. Changchun: Jilin University, 2023: 82-83.
[16] SHEN Junyu, LI Dongwen, ZHONG Zhenyu, et al. A text data deduplication algorithm based on locality-sensitive hashing with implementation[J]. Journal of Nankai University (Natural Science), 2023, 56(6): 29-35.
[17] SAGER C, JANIESCH C, ZSCHECH P. A survey of image labelling for computer vision applications[J]. Journal of Business Analytics, 2021, 4(2): 91-110. doi: 10.1080/2573234X.2021.1908861
[18] LIU Fuyujie, SONG Junru, LUO Rui, et al. Automatic inspection system for masonry wall bricks based on machine vision[J]. China Measurement & Test, 2022, 48(S2): 150-157.
[19] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334. doi: 10.1109/34.888718
[20] ZHU Yixin, DONG Shuai, CHEN Fanxiu, et al. Influence of the position and posture of the calibration board on camera calibration[J]. Journal of Experimental Mechanics, 2024, 39(6): 705-718.
[21] HAO Yongping, WANG Yongjie, ZHANG Jiayi, et al. Pixel equivalent calibration method for vision measurement[J]. Nanotechnology and Precision Engineering, 2014, 12(5): 373-380.