Journal of Textile Research ›› 2025, Vol. 46 ›› Issue (12): 208-215. doi: 10.13475/j.fzxb.20250705901

• Apparel Engineering •

Automatic size measurement of women's trousers based on machine vision and YOLO11n

TUO Wu1, LIU Qiongyang1, LI Qingxiang1, CHEN Qian1, FAN Ruige1, LI Pei2

  1. College of Fashion Technology, Zhongyuan University of Technology, Zhengzhou, Henan 451191, China
    2. Research Institute of Textile and Clothing Industries, Zhongyuan University of Technology, Zhengzhou, Henan 451191, China
  • Received: 2025-07-24 Revised: 2025-09-15 Online: 2025-12-15 Published: 2026-02-06

Abstract:

Objective Traditional size measurement methods in the apparel industry suffer from low efficiency, high subjectivity, and unstable accuracy, while existing three-dimensional measurement devices are costly and complex to operate, hindering widespread adoption. To address these issues, this paper proposes an automatic measurement method for women's trousers that integrates machine vision with the YOLO11 deep learning model. The approach aims to achieve non-contact, precise measurement of the key sizes of women's trousers, advancing the digital and intelligent transformation of garment size inspection and providing a robust technological solution for automated quality control in the apparel industry.

Method The measurement system comprised hardware and software components: the hardware performed image acquisition and the software carried out size measurement. A dataset of 2 000 images of women's trousers was constructed for the experiment. After image preprocessing, annotation, and format conversion, the YOLO11n-pose pose estimation model was used to detect key points. The pixel coordinates of the detected key points were converted from pixel distances to actual physical distances using Zhang's camera calibration method, enabling calculation of the sizes of five target regions. Three different pairs of women's trousers were selected to compare the automatic measurement results with the manual measurement results, and measurement accuracy was evaluated by absolute error and relative error.
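The core conversion in this pipeline, from detected key-point pixel coordinates to physical sizes, can be sketched in Python. The key-point coordinates and function names below are illustrative assumptions, not the paper's implementation; only the pixel equivalent of 0.298913 mm/pixel is taken from the calibration result reported later in the abstract.

```python
import math

# Pixel equivalent obtained from the Zhang-style calibration (mm per pixel),
# as reported in the Results section of this paper.
PIXEL_EQUIVALENT = 0.298913  # mm/pixel

def pixel_distance(p1, p2):
    """Euclidean distance between two key points in pixel coordinates."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def physical_size_mm(p1, p2, mm_per_pixel=PIXEL_EQUIVALENT):
    """Convert a pixel-space key-point distance to millimetres."""
    return pixel_distance(p1, p2) * mm_per_pixel

# Hypothetical detected key points for one measurement region (x, y in pixels)
left_hem, right_hem = (410.0, 3200.0), (1141.0, 3200.0)
width_mm = physical_size_mm(left_hem, right_hem)
```

For horizontal or vertical size lines such as the leg-opening width, the Euclidean distance reduces to a simple coordinate difference, which is why the paper distinguishes vertical and horizontal distances between positioning points.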

Results The YOLO11n-pose key point detection model was trained on the self-built dataset of women's trousers. Experimental results demonstrated a gradual decrease and eventual stabilization of the overall loss value, indicating good fitting performance. As training epochs increased, the model's precision, recall, and mAP (mean average precision) gradually rose and stabilized, indicating improved model performance with convergence. Specifically, the model achieved 100% precision and recall, with mAP50 and mAP50-95 reaching 99.5% and 99.3%, respectively. The detection results showed that, by learning from a large amount of labeled data, the model acquired the ability to effectively detect the key measurement points of women's trousers, providing reliable support for subsequent size measurement. In the camera calibration stage, calibration plates were created and imaged to determine the pixel equivalent. The average reprojection error after calibration was 0.070 549 pixels, indicating high-precision calibration that meets the experimental requirements. The pixel equivalent was calculated to be 0.298 913 mm/pixel. After converting pixel distances to actual physical distances, the sizes of the women's trousers were calculated from the vertical and horizontal Euclidean distances between pairs of key positioning points. Three pairs of women's trousers were selected to compare the automatic measurements with the manual measurements, using absolute error and relative error as evaluation criteria. The average absolute errors between the measured and actual sizes at the five measurement locations on the three pairs of trousers were 5.45 mm, 5.89 mm, 1.27 mm, 2.35 mm, and 6.26 mm, respectively; the average relative errors were 0.82%, 0.61%, 0.58%, 0.64%, and 0.64%, respectively.
The measurement results complied with the industry standard requirements, indicating that the proposed machine vision and deep learning based automatic measurement method for women's trouser sizes has high accuracy and reliability.
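The two evaluation criteria can be expressed directly as a minimal sketch; the sample values are the waist-girth figures for trousers 1 from Tab. 2, and the function names are illustrative.

```python
def absolute_error(measured, actual):
    """Absolute error, in the same unit as the inputs (here mm)."""
    return abs(measured - actual)

def relative_error(measured, actual):
    """Relative error as a percentage of the manually measured (actual) size."""
    return abs(measured - actual) / actual * 100.0

# Waist girth of trousers 1: system mean vs. manual mean (mm), from Tab. 2
system_mm, manual_mm = 650.43, 644.00
abs_err = absolute_error(system_mm, manual_mm)   # 6.43 mm
rel_err = relative_error(system_mm, manual_mm)   # rounds to 1.00 %
```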

Conclusion This study combines machine vision, automatic measurement system design, and the lightweight deep learning algorithm YOLO11n-pose to achieve non-contact precise measurement of key sizes of women's trousers. Experimental results show that the system's measurement error for women's trousers is consistently within the industry's acceptable range, validating the feasibility of this method in actual production. From an application perspective, this system breaks through the bottleneck of traditional manual measurement, providing apparel companies with an automated and standardized size inspection solution. In the future, the measurement robustness can be further improved, and the deep application of machine vision technology in size inspection across all apparel categories can be explored to drive the digital transformation of size measurement processes within the apparel industry.

Key words: women's trousers, machine vision, deep learning, camera calibration, YOLO11, key point detection, trouser size, non-contact measurement

CLC Number: TS941.26

Fig.1

Measurement system structure diagram

Fig.2

Measurement schematic diagram

Fig.3

Partial tiled images of women's trousers

Fig.4

Image edge replication padding schematic diagram

Tab.1

Experimental environment configuration

Configuration | Information
Operating system | Windows 10
CPU | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz
GPU | NVIDIA GeForce RTX 3090
Development environment | PyCharm 2024.1
Programming language | Python 3.11
Deep learning framework | PyTorch 2.7.1 + CUDA 12.8

Fig.5

Model training process diagram

Fig.6

Model key point training results diagram. (a) Key point loss of training; (b) Key point loss of validation; (c) Precision of training key points; (d) Recall of training key points; (e) mAP50 of training key points; (f) mAP50-95 of training key points

Fig.7

Reprojection error distribution diagram

Tab.2

Automatic and manual measurement results of women's trousers sizes

Trousers No. | Position | System mean/mm | Manual mean/mm | Absolute error/mm | Relative error/%
Trousers 1 | Waist girth | 650.43 | 644.00 | 6.43 | 1.00
Trousers 1 | Hip girth | 971.47 | 979.00 | 7.53 | 0.77
Trousers 1 | Leg-opening width | 216.10 | 218.00 | 1.90 | 0.87
Trousers 1 | Crotch depth | 338.67 | 340.00 | 1.33 | 0.39
Trousers 1 | Trouser length | 969.49 | 962.00 | 7.49 | 0.78
Trousers 2 | Waist girth | 639.67 | 645.00 | 5.33 | 0.83
Trousers 2 | Hip girth | 920.05 | 926.00 | 5.95 | 0.64
Trousers 2 | Leg-opening width | 217.68 | 219.00 | 1.32 | 0.60
Trousers 2 | Crotch depth | 373.94 | 376.00 | 2.06 | 0.55
Trousers 2 | Trouser length | 981.89 | 978.00 | 3.89 | 0.40
Trousers 3 | Waist girth | 718.59 | 714.00 | 4.59 | 0.64
Trousers 3 | Hip girth | 985.82 | 990.00 | 4.18 | 0.42
Trousers 3 | Leg-opening width | 227.58 | 227.00 | 0.58 | 0.26
Trousers 3 | Crotch depth | 370.65 | 367.00 | 3.65 | 1.00
Trousers 3 | Trouser length | 989.39 | 982.00 | 7.39 | 0.75

Tab.3

Error of five key measurement positions on women's trousers

Position | Absolute error/mm (Trousers 1 / 2 / 3 / Mean) | Relative error/% (Trousers 1 / 2 / 3 / Mean)
Waist girth | 6.43 / 5.33 / 4.59 / 5.45 | 1.00 / 0.83 / 0.64 / 0.82
Hip girth | 7.53 / 5.95 / 4.18 / 5.89 | 0.77 / 0.64 / 0.42 / 0.61
Leg-opening width | 1.90 / 1.32 / 0.58 / 1.27 | 0.87 / 0.60 / 0.26 / 0.58
Crotch depth | 1.33 / 2.06 / 3.65 / 2.35 | 0.39 / 0.55 / 1.00 / 0.64
Trouser length | 7.49 / 3.89 / 7.39 / 6.26 | 0.78 / 0.40 / 0.75 / 0.64
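The per-position means in Tab. 3 are the averages of the three per-trouser errors reported in Tab. 2. A minimal sketch reproducing the mean absolute errors (position names translated from the table):

```python
# Absolute errors (mm) per position for trousers 1-3, taken from Tab. 2
abs_errors = {
    "waist girth": [6.43, 5.33, 4.59],
    "hip girth": [7.53, 5.95, 4.18],
    "leg-opening width": [1.90, 1.32, 0.58],
    "crotch depth": [1.33, 2.06, 3.65],
    "trouser length": [7.49, 3.89, 7.39],
}

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

mean_abs = {pos: round(mean(v), 2) for pos, v in abs_errors.items()}
# e.g. mean_abs["waist girth"] == 5.45
```

The mean relative errors in the last column are computed the same way from the per-trouser relative errors.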
[1] 郭增荣. 面向服装电子商务的试衣机器人系统设计研究[D]. 杭州: 浙江大学, 2019: 1-4.
GUO Zengrong. Research and design of a fitting robot system for garment e-commerce[D]. Hangzhou: Zhejiang University, 2019: 1-4.
[2] 王晓菲, 王燕珍. 三维人体测量技术的发展现状及其应用[J]. 毛纺科技, 2021, 49(10): 106-111.
WANG Xiaofei, WANG Yanzhen. Development status and application of 3D anthropometric technology[J]. Wool Textile Journal, 2021, 49(10): 106-111.
[3] 董建明, 胡觉亮. 一种有效的服装尺寸自动测量方法[J]. 纺织学报, 2008, 29(5): 98-101.
DONG Jianming, HU Jueliang. An efficient method for automatic measurement of garment dimensions[J]. Journal of Textile Research, 2008, 29(5): 98-101.
[4] 李鹏飞, 郑明智, 景军锋. 基于机器视觉的服装尺寸在线测量系统[J]. 毛纺科技, 2017, 45(3): 42-47.
LI Pengfei, ZHENG Mingzhi, JING Junfeng. Measurement system of garment dimension based on machine vision[J]. Wool Textile Journal, 2017, 45(3): 42-47.
[5] 王生伟, 张侃健. 基于角点检测的服装尺寸在线测量技术[J]. 信息技术与信息化, 2018, 12: 73-75.
WANG Shengwei, ZHANG Kanjian. Online measurement technology for garment size based on corner detection[J]. Information Technology and Informatization, 2018, 12: 73-75.
[6] 肖祎. 基于拍照的服装和人体尺寸测量系统设计与研发[D]. 杭州: 浙江大学, 2019: 16-37.
XIAO Yi. Design and development of a garment and human body measurement system based on photography[D]. Hangzhou: Zhejiang University, 2019: 16-37.
[7] LI C, XU Y, XIAO Y, et al. Automatic measurement of garment sizes using image recognition[C]// Graphics and Signal Processing. New York: Association for Computing Machinery, 2017: 30-34.
[8] SERRAT J, LUMBRERAS F, RUIZ I. Learning to measure for preshipment garment sizing[J]. Measurement, 2018, 130: 327-339.
doi: 10.1016/j.measurement.2018.08.019
[9] 王奕文. 古代汉服尺寸自动测量方法研究[D]. 杭州: 浙江理工大学, 2021: 23-50.
WANG Yiwen. Research on the automatic measurement method of ancient Hanfu size[D]. Hangzhou: Zhejiang Sci-Tech University, 2021: 23-50.
[10] 王奕文, 罗戎蕾, 康宇哲. 基于卷积神经网络的汉服关键尺寸自动测量[J]. 纺织学报, 2020, 41(12): 124-129.
WANG Yiwen, LUO Ronglei, KANG Yuzhe. Automatic measurement of key dimensions for Han-style costumes based on use of convolutional neural network[J]. Journal of Textile Research, 2020, 41(12): 124-129.
doi: 10.13475/j.fzxb.20200505006
[11] PAULAUSKAITE-TARASEVICIENE A, NOREIKA E, PURTOKAS R, et al. An intelligent solution for automatic garment measurement using image recognition technologies[J]. Applied Sciences, 2022, 12(9): 4470.
doi: 10.3390/app12094470
[12] KIM S, MOON H, OH J, et al. Automatic measurements of garment sizes using computer vision deep learning models and point cloud data[J]. Applied Sciences, 2022, 12(10): 5286.
doi: 10.3390/app12105286
[13] 管昉立, 徐爱俊. 基于智能手机与机器视觉技术的立木胸径测量方法[J]. 浙江农林大学学报, 2018, 35(5): 892-899.
GUAN Fangli, XU Aijun. Tree DBH measurement method based on smartphone and machine vision technology[J]. Journal of Zhejiang A&F University, 2018, 35(5): 892-899.
[14] 庹武, 杜聪, 陈谦, 等. 基于计算机视觉与Canny算法的服装纸样轮廓提取[J]. 纺织学报, 2024, 45(5): 174-182.
TUO Wu, DU Cong, CHEN Qian, et al. Clothing pattern contour extraction based on computer vision and Canny algorithm[J]. Journal of Textile Research, 2024, 45(5): 174-182.
[15] 李惠敏. 多品种小规模服装缝制生产复杂性及其传递机制研究[D]. 长春: 吉林大学, 2023: 82-83.
LI Huimin. Research on the complexity of multi-variety small-scale garment sewing production and its transmission mechanism[D]. Changchun: Jilin University, 2023: 82-83.
[16] 申峻宇, 李东闻, 钟震宇, 等. 一种基于局部敏感哈希的文本数据去重算法及其实现[J]. 南开大学学报(自然科学版), 2023, 56(6): 29-35.
SHEN Junyu, LI Dongwen, ZHONG Zhenyu, et al. A text data deduplication algorithm based on locality-sensitive Hashing with implementation[J]. Journal of Nankai University(Natural Science), 2023, 56(6): 29-35.
[17] SAGER C, JANIESCH C, ZSCHECH P. A survey of image labelling for computer vision applications[J]. Journal of Business Analytics, 2021, 4(2): 91-110.
doi: 10.1080/2573234X.2021.1908861
[18] 刘付渝杰, 宋俊儒, 罗睿, 等. 基于机器视觉的砌墙砖自动检测系统[J]. 中国测试, 2022, 48(S2): 150-157.
LIU Fuyujie, SONG Junru, LUO Rui, et al. Automatic inspection system for masonry wall bricks based on machine vision[J]. China Measurement & Test, 2022, 48(S2): 150-157.
[19] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
doi: 10.1109/34.888718
[20] 朱一鑫, 董帅, 陈凡秀, 等. 标定板位姿对相机标定精度的影响[J]. 实验力学, 2024, 39(6): 705-718.
ZHU Yixin, DONG Shuai, CHEN Fanxiu, et al. Influence of the position and posture of the calibration board on camera calibration[J]. Journal of Experimental Mechanics, 2024, 39(6): 705-718.
[21] 郝永平, 王永杰, 张嘉易, 等. 面向视觉测量的像素当量标定方法[J]. 纳米技术与精密工程, 2014, 12(5): 373-380.
HAO Yongping, WANG Yongjie, ZHANG Jiayi, et al. Pixel equivalent calibration method for vision measurement[J]. Nanotechnology and Precision Engineering, 2014, 12(5): 373-380.