纺织学报 ›› 2023, Vol. 44 ›› Issue (03): 168-175. doi: 10.13475/j.fzxb.20220102308

• 服装工程 •

复杂背景下人体轮廓及其参数提取

顾冰菲1,2,3, 张健1, 徐凯忆1, 赵崧灵1, 叶凡1, 侯珏1,2,3()   

  1. 浙江理工大学 服装学院, 浙江 杭州 310018
    2.浙江省服装工程技术研究中心, 浙江 杭州 310018
    3.丝绸文化传承与产品设计数字化技术文化和旅游部重点实验室, 浙江 杭州 310018
  • 收稿日期:2022-01-13 修回日期:2022-10-27 出版日期:2023-03-15 发布日期:2023-04-14
  • 通讯作者: 侯珏(1990—),男,讲师,博士。主要研究方向为机器视觉技术,纺织品检测。E-mail:hj1990@zstu.edu.cn
  • 作者简介:顾冰菲(1987—),女,副教授,博士。主要研究方向为数字化服装技术。
  • 基金资助:
    国家自然科学基金项目(61702461);中国纺织工业联合会科技指导性项目(2018079);中国纺织工业联合会应用基础研究项目(J202007);浙江理工大学优秀研究生学位论文培育基金项目(LW-YP2021054)

Human contour and parameter extraction from complex background

GU Bingfei1,2,3, ZHANG Jian1, XU Kaiyi1, ZHAO Songling1, YE Fan1, HOU Jue1,2,3()   

  1. School of Fashion Design & Engineering, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. Clothing Engineering Research Center of Zhejiang Province, Hangzhou, Zhejiang 310018, China
    3. Key Laboratory of Silk Culture Heritage and Products Design Digital Technology, Ministry of Culture and Tourism, Hangzhou, Zhejiang 310018, China
  • Received:2022-01-13 Revised:2022-10-27 Published:2023-03-15 Online:2023-04-14

摘要:

针对基于人体照片的尺寸提取技术对照片拍摄场景限制的问题,提出利用整体嵌套边缘检测深度学习模型实现复杂背景下人体轮廓的提取并进行参数提取分析。以450张不同背景人体照片为原始图像数据集,通过人体轮廓标签图制作与数据增强手段建立了43 200张图片训练集,利用深度学习网络模型进行训练学习并构建最优边缘检测模型;最后选取40名样本作为验证对象,以13个人体比例、角度等参数作为验证参数,对人体轮廓提取值与三维点云测量值进行误差分析。结果表明,本文研究成果能够快速实现复杂背景下人体轮廓的自动提取,且人体轮廓提取值与三维点云测量值的角度参数误差小于2°,比例参数误差小于0.09,为非接触式二维测量技术的进一步研究提供理论依据和技术支撑。

关键词: 复杂背景, 深度学习, 人体轮廓提取, 整体嵌套边缘检测网络, 二维照片

Abstract:

Objective For the collection of human body images in non-contact two-dimensional measurement systems, most early studies reduced the difficulty of human contour extraction by restricting the shooting background. In order to acquire human contour and parameter information quickly and easily from photographs with complex backgrounds, this study proposes the use of a holistically-nested edge detection (HED) deep learning model to extract human contours and to perform parameter extraction and analysis.

Method Taking 450 human photos with different backgrounds as the original image dataset, a training set of 43 200 images was established through human contour label-map creation, data augmentation and pre-processing. A deep learning network model was then trained on this set, and the optimal edge detection model was obtained after repeated training and tuning to achieve automatic human contour extraction. In addition, in order to verify that the extracted human silhouette is consistent with the real human form, 13 human body parameters were selected for error analysis.
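The augmentation operations used to expand the dataset are not enumerated in this abstract. As an illustration only, the minimal Python sketch below expands paired photo/contour-label images with a few common transforms (mirroring, small rotations, brightness changes); the transform list, file-naming scheme and helper names are assumptions, not the authors' actual pipeline.

```python
# Hypothetical augmentation sketch (not the authors' exact pipeline):
# expand each (photo, contour-label) pair with simple geometric/photometric transforms.
from pathlib import Path
from PIL import Image, ImageEnhance

def augment_pair(img, label):
    """Yield augmented (image, label) pairs; geometric ops are applied to both."""
    yield img, label                                                  # original
    flip = Image.Transpose.FLIP_LEFT_RIGHT
    yield img.transpose(flip), label.transpose(flip)                  # horizontal mirror
    for angle in (-10, 10):                                           # small rotations
        yield img.rotate(angle), label.rotate(angle)
    for factor in (0.7, 1.3):                                         # brightness changes photo only
        yield ImageEnhance.Brightness(img).enhance(factor), label

def build_training_set(src_dir, dst_dir):
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for img_path in sorted(Path(src_dir).glob("*_img.png")):          # assumed naming convention
        lbl_path = img_path.with_name(img_path.name.replace("_img", "_label"))
        img = Image.open(img_path).convert("RGB")
        lbl = Image.open(lbl_path).convert("L")
        for i, (a_img, a_lbl) in enumerate(augment_pair(img, lbl)):
            a_img.save(out / f"{img_path.stem}_{i}.png")
            a_lbl.save(out / f"{lbl_path.stem}_{i}.png")
            count += 1
    return count
```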

Results After training and tuning the deep learning model to obtain the optimal model for human contour extraction, the test-set images were output (Fig.5). The optimized HED network model extracted clear human contours without cluttered background information, although the extracted contour edge lines were thick. Therefore, the Zhang-Suen edge thinning algorithm was adopted to refine the human contours (Fig.5(d)). After edge refinement, not only are the edges of the human contour detailed, clear and continuous, but some non-human contour details in the original contour map are also removed, making the background cleaner. In addition, in order to verify the accuracy of the extracted human contours, 40 subjects were selected and subjected to 3-D anthropometric measurement and 2-D photography, respectively. Human contours were extracted from the photographs, feature points were identified and located, and 13 measurements were then extracted and compared with the corresponding manual measurements of the 3-D point clouds (Tab.3). The errors of the 3 angle parameters range between 0.125 3° and 1.862 2°, and the errors of the 10 ratio parameters range between 0.000 2 and 0.081 7. The data indicate that there is no significant difference between the contour measurements and the 3-D real values, and that a high consistency is maintained between them, which further verifies the feasibility and accuracy of extracting human contours from 2-D photographs.
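As a hedged sketch of the edge-refinement step described above, the snippet below binarizes an HED edge-probability map and thins it to single-pixel width with scikit-image's skeletonize, whose "zhang" method implements the Zhang-Suen thinning algorithm; the threshold value and the small-object filtering step are assumptions, not the paper's exact settings.

```python
# Hedged sketch of contour refinement: threshold the HED edge map,
# drop small fragments, then apply Zhang-Suen style thinning.
import numpy as np
from skimage import io
from skimage.morphology import skeletonize, remove_small_objects

def refine_contour(edge_map_path, threshold=0.5):
    prob = io.imread(edge_map_path, as_gray=True).astype(np.float64)
    if prob.max() > 1.0:                        # 8-bit edge map -> [0, 1]
        prob /= 255.0
    binary = prob > threshold                   # threshold value is an assumption
    binary = remove_small_objects(binary, min_size=64)   # remove small non-body fragments
    thin = skeletonize(binary, method="zhang")  # Zhang-Suen thinning (single-pixel edges)
    return thin.astype(np.uint8) * 255
```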

Conclusion This paper proposed a deep learning-based method for human contour extraction from complex backgrounds, using the optimized HED network model as the main framework to train on the human photo dataset and achieving contour extraction and edge refinement for human photos with complex backgrounds. Meanwhile, in order to verify the authenticity of the photo-extracted contours, 13 ratio and angle parameters that reflect the human body shape were selected, and error analysis was performed between the contour extraction values and the 3-D point cloud measurements of 40 subjects. The results show that the improved HED network model can accurately extract clear and continuous human contours from complex backgrounds with a cleaner output background, and that there is no significant difference between the contour-based parameter values and the 3-D point cloud measurements. This proves the feasibility and accuracy of the proposed method, and the results can provide theoretical and technical support for further research on non-contact 2-D measurement technology.

Key words: complex background, deep learning, human contour extraction, holistically-nested edge detection network, two-dimensional photo

中图分类号: 

  • TS941.17

图1

原始图像与人体轮廓标签图

表1

网络训练样本数量配置

数据类型 原始图像样本数 增强图像样本数 样本总量
训练集 400 42 800 43 200
验证集 30 210 240
测试集 20 140 160

图2

数据增强

图3

HED(VGG19)网络模型结构
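作为示意,下面给出一个以 VGG19 为骨干、带 5 个侧输出(side output)与融合层的极简 HED 结构 PyTorch 草图;各阶段的划分点、上采样方式等均为假设,并非本文网络的完整实现:

```python
# 极简示意:VGG19 骨干 + HED 侧输出与融合层(假设性实现,非论文原始代码)
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

class MiniHED(nn.Module):
    def __init__(self):
        super().__init__()
        features = vgg19(weights=None).features          # 也可加载预训练权重做迁移学习
        # 按池化层把 VGG19 特征划分为 5 个阶段(划分点为假设)
        self.stages = nn.ModuleList([
            features[:4], features[4:9], features[9:18],
            features[18:27], features[27:36],
        ])
        channels = [64, 128, 256, 512, 512]
        self.side = nn.ModuleList([nn.Conv2d(c, 1, kernel_size=1) for c in channels])
        self.fuse = nn.Conv2d(5, 1, kernel_size=1)        # 融合 5 个侧输出

    def forward(self, x):
        h, w = x.shape[2:]
        side_outs = []
        for stage, side in zip(self.stages, self.side):
            x = stage(x)
            # 各侧输出上采样到输入分辨率
            s = F.interpolate(side(x), size=(h, w), mode="bilinear", align_corners=False)
            side_outs.append(s)
        fused = self.fuse(torch.cat(side_outs, dim=1))
        return [torch.sigmoid(s) for s in side_outs] + [torch.sigmoid(fused)]
```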

图4

网络结构修改前后训练结果对比

图5

形态参数测量示意图

表2

形态参数具体定义

序号 测量项目 测量及计算方法
1 身高(H) 头顶点至地面的垂直距离
2 颈高(HN) 颈部截面至地面的垂直距离
3 肩高(HS) 肩部截面至地面的垂直距离
4 腋下高(HA) 腋下部截面至地面的垂直距离
5 臀高(HH) 臀部截面至地面的垂直距离
6 颈宽(WN) 左颈点(PLN)与右颈点(PRN)的水平距离
7 颈厚(TN) 前颈点(PFN)与后颈点(PBN)的水平距离
8 肩宽(WS) 左肩端点(PLS)与右肩端点(PRS)的水平距离
9 肩厚(TS) 肩部截面外接矩形厚度(PFS与PBS的水平距离)
10 腋下宽(WA) 左腋点(PLA)与右腋点(PRA)的水平距离
11 腋下厚(TA) 腋下部截面外接矩形厚度(PFA与PBA的水平距离)
12 臀宽(WH) 左臀点(PLH)与右臀点(PRH)的水平距离
13 臀厚(TH) 臀部截面外接矩形厚度(PFH与PBH的水平距离)
14 颈身高比(RHN) 颈高(HN)/身高(H)
15 肩身高比(RHS) 肩高(HS)/身高(H)
16 腋身高比(RHA) 腋下高(HA)/身高(H)
17 臀身高比(RHH) 臀高(HH)/身高(H)
18 颈肩宽比(RWNS) 颈宽(WN)/肩宽(WS)
19 颈腋宽比(RWNA) 颈宽(WN)/腋下宽(WA)
20 颈臀宽比(RWNH) 颈宽(WN)/臀宽(WH)
21 颈肩厚比(RTNS) 颈厚(TN)/肩厚(TS)
22 颈腋厚比(RTNA) 颈厚(TN)/腋下厚(TA)
23 颈臀厚比(RTNH) 颈厚(TN)/臀厚(TH)
24 肩斜角(AST) 右颈点(PRN)和右肩端点(PRS)的连线与水平线的夹角
25 背入角(ADE) 背凸点(PB)和后颈点(PBN)的连线与垂直线的夹角
26 臀凸角(AHB) 臀凸点(PBH)与后腰点(PBW)的连线与垂直线的夹角
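作为示意,下面给出由轮廓特征点像素坐标计算表中肩斜角(AST)与颈肩宽比(RWNS)的简单 Python 片段;特征点坐标的获取方式与示例数值均为假设:

```python
# 示意:由特征点像素坐标计算肩斜角与颈肩宽比(坐标为虚构数据)
import math

def shoulder_slope_angle(p_rn, p_rs):
    """肩斜角 AST:右颈点与右肩端点连线和水平线的夹角(单位:度)。"""
    dx = abs(p_rs[0] - p_rn[0])
    dy = abs(p_rs[1] - p_rn[1])
    return math.degrees(math.atan2(dy, dx))

def neck_shoulder_width_ratio(p_ln, p_rn, p_ls, p_rs):
    """颈肩宽比 RWNS:颈宽 WN(左右颈点水平距离)/肩宽 WS(左右肩端点水平距离)。"""
    wn = abs(p_rn[0] - p_ln[0])
    ws = abs(p_rs[0] - p_ls[0])
    return wn / ws

ast = shoulder_slope_angle(p_rn=(212, 180), p_rs=(265, 205))
rwns = neck_shoulder_width_ratio((180, 182), (212, 180), (150, 206), (265, 205))
```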

图6

人体肩部局部曲线拟合
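图6为人体肩部局部曲线拟合。下面给出用最小二乘多项式拟合肩部轮廓点的简单草图;拟合阶数与数据点均为假设,仅作说明:

```python
# 示意:对肩部轮廓点做最小二乘多项式拟合(阶数与数据均为假设)
import numpy as np

# 假设已从细化后的轮廓图中提取出肩部区域轮廓点的像素坐标
x = np.array([150, 170, 190, 210, 230, 250, 265], dtype=float)
y = np.array([206, 202, 197, 191, 186, 182, 180], dtype=float)

coeffs = np.polyfit(x, y, deg=3)      # 三次多项式最小二乘拟合
shoulder_curve = np.poly1d(coeffs)    # 可在任意 x 处求取拟合后的 y 值

y_fit = shoulder_curve(np.linspace(x.min(), x.max(), 100))
```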

表3

误差分析表

指标 颈身高比(RHN) 肩身高比(RHS) 腋身高比(RHA) 臀身高比(RHH) 颈肩宽比(RWNS) 颈腋宽比(RWNA) 颈臀宽比(RWNH)
CMV 0.845 1 0.807 7 0.744 8 0.518 4 0.317 8 0.383 0 0.367 9
RV 0.837 1 0.789 0 0.736 4 0.493 2 0.317 5 0.383 8 0.385 0
AE 0.009 3 0.020 4 0.015 5 0.025 2 0.012 7 0.014 5 0.019 1
ER 0.001 5~0.021 1 0.003 9~0.037 3 0.000 2~0.042 1 0.006 7~0.040 6 0.001 8~0.031 9 0.000 4~0.038 7 0.001 3~0.037 5
指标 颈肩厚比(RTNS) 颈腋厚比(RTNA) 颈臀厚比(RTNH) 肩斜角(AST)/(°) 背入角(ADE)/(°) 臀凸角(AHB)/(°)
CMV 0.722 5 0.570 4 0.519 9 26.668 7 19.437 9 13.145 2
RV 0.685 4 0.537 2 0.494 2 27.581 7 19.064 8 12.416 5
AE 0.081 6 0.042 7 0.032 6 0.998 6 1.185 7 0.778 6
ER 0.001 6~0.080 9 0.014 1~0.07 0.001 4~0.081 7 0.125 3~1.736 5 0.243 8~1.354 3 0.271 4~1.862 2
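下面给出对单一参数的轮廓提取值与三维点云测量值进行误差统计(平均误差与逐样本误差范围)的简单示意;样本数据为虚构:

```python
# 示意:对某一参数的 40 名样本做误差统计(数据为虚构)
import numpy as np

def error_stats(contour_vals, cloud_vals):
    """返回平均绝对误差以及逐样本绝对误差的最小值和最大值。"""
    contour_vals = np.asarray(contour_vals, dtype=float)
    cloud_vals = np.asarray(cloud_vals, dtype=float)
    abs_err = np.abs(contour_vals - cloud_vals)
    return abs_err.mean(), (abs_err.min(), abs_err.max())

rng = np.random.default_rng(0)
mean_err, (err_lo, err_hi) = error_stats(
    rng.normal(26.7, 1.0, 40),   # 轮廓提取的肩斜角(°),虚构数据
    rng.normal(27.6, 1.0, 40))   # 三维点云测量的肩斜角(°),虚构数据
```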
[1] GU B, LIU G, XU B. Individualizing women's suit patterns using body measurements from two-dimensional images[J]. Textile Research Journal, 2017, 87(6): 669-681.
doi: 10.1177/0040517516636001
[2] SATAM D, LIU Y, LEE H J. Intelligent design systems for apparel mass customization[J]. The Journal of The Textile Institute, 2011, 102(4): 353-365.
doi: 10.1080/00405000.2010.482351
[3] 王婷, 顾冰菲. 基于图像的人体颈肩部三维模型构建[J]. 纺织学报, 2021, 42(1): 125-132.
WANG Ting, GU Bingfei. 3-D modeling of neck-shoulder part based on human photos[J]. Journal of Textile Research, 2021, 42(1): 125-132.
[4] 甘应进, 陈东生, 孟爽, 等. 非接触式三维人体计测现状[J]. 纺织学报, 2005, 26(3): 145-161.
GAN Yingjin, CHEN Dongsheng, MENG Shuang, et al. Recent development of non-touch 3D body measure-ment[J]. Journal of Textile Research, 2005, 26(3): 145-161.
[5] GU B, LIU G, XU B. Girth prediction of young female body using orthogonal silhouettes[J]. The Journal of The Textile Institute, 2017, 108(1): 140-146.
doi: 10.1080/00405000.2016.1160756
[6] 顾冰菲, 李欣华, 钟泽君, 等. 基于人体数字图像的青年女体围度拟合[J]. 丝绸, 2019, 56(8): 46-51.
GU Bingfei, LI Xinghua, ZHONG Zejun, et al. Girth fitting of young women based on body digital images[J]. Journal of Silk, 2019, 56(8): 46-51.
[7] 王婷, 顾冰菲. 基于二维图像的青年女性颈肩部形态自动识别[J]. 纺织学报, 2020, 41(12): 111-117.
WANG Ting, GU Bingfei. Automatic identification of young women's neck-shoulder shapes based on images[J]. Journal of Textile Research, 2020, 41(12): 111-117.
[8] 冯文倩, 李新荣, 杨帅. 人体轮廓机器视觉检测算法的研究进展[J]. 纺织学报, 2021, 42(3): 190-196.
FENG Wenqian, LI Xinrong, YANG Shuai. Research progress in machine vision algorithm for human contour detection[J]. Journal of Textile Research, 2021, 42(3): 190-196.
[9] 李科, 毋涛, 刘青青. 基于深度图与改进Canny算法的人体轮廓提取[J]. 计算机技术与发展, 2021, 31(5): 67-72.
LI Ke, WU Tao, LIU Qingqing. Human contour extraction based on depth map and improved Canny algorithm[J]. Computer Technology and Development, 2021, 31(5): 67-72.
[10] 李翠锦, 瞿中. 基于深度学习的图像边缘检测算法综述[J]. 计算机应用, 2020, 40(11): 3280-3288.
doi: 10.11772/j.issn.1001-9081.2020030314
LI Cuijin, QU Zhong. Review of image edge detection algorithms based on deep learning[J]. Journal of Computer Applications, 2020, 40(11): 3280-3288.
doi: 10.11772/j.issn.1001-9081.2020030314
[11] 吴泽斌, 张东亮, 李基拓, 等. 复杂场景下的人体轮廓提取及尺寸测量[J]. 图学学报, 2020, 41(5): 740-749.
WU Zebin, ZHANG Dongliang, LI Jituo, et al. Contour recognition and information extraction of human bodies in complex scenes[J]. Journal of Graphics, 2020, 41(5): 740-749.
[12] DE SOUZA J W, HOLANDA G B, IVO R F, et al. Predicting body measures from 2D images using convolutional neural networks[C]// 2020 International Joint Conference on Neural Networks (IJCNN). Glasgow:IEEE, 2020: 1-6.
[13] SHEN X, HERTZMANN A, JIA J, et al. Automatic portrait segmentation for image stylization[J]. Computer Graphics Forum, 2016, 35(2): 93-102.
doi: 10.1111/cgf.12814
[14] XIE S, TU Z. Holistically-nested edge detection[C]// 2015 IEEE International Conference on Computer Vision (ICCV). Santiago:IEEE, 2015: 1395-1403.
[15] 赵启雯, 徐琨, 徐源. 基于HED网络的快速纸张边缘检测方法[J]. 计算机与现代化, 2021(5): 1-5.
ZHAO Qiwen, XU Kun, XU Yuan. Fast paper edge detection method based on HED Network[J]. Computer and Modernization, 2021(5): 1-5.
[16] 焦安波, 何淼, 罗海波. 一种改进的HED网络及其在边缘检测中的应用[J]. 红外技术, 2019, 41(1): 72-77.
JIAO Anbo, HE Miao, LUO Haibo. Research on significant edge detection of infrared image based on deep learning[J]. Infrared Technology, 2019, 41(1): 72-77.
[17] LU J, BEHBOOD V, HAO P. Transfer learning using computational intelligence: a survey[J]. Knowledge-Based Systems, 2015, 80: 14-23.
doi: 10.1016/j.knosys.2015.01.010
[18] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]// 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami: IEEE, 2009: 248-255.
[19] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338.
doi: 10.1007/s11263-009-0275-4
[20] 赵小虎, 李晓, 叶圣, 等. 基于改进U-Net网络的多尺度番茄病害分割算法[J]. 计算机工程与应用, 2022, 58(10): 216-223.
doi: 10.3778/j.issn.1002-8331.2105-0201
ZHAO Xiaohu, LI Xiao, YE Sheng, et al. Multi-scale tomato disease segmentation algorithm based on improved U-net network[J]. Computer Engineering and Applications, 2022, 58(10): 216-223.
doi: 10.3778/j.issn.1002-8331.2105-0201
[21] ZHANG T Y, SUEN C Y. A fast parallel algorithm for thinning digital patterns[J]. Communications of the ACM, 1984, 27(3): 236-239.
doi: 10.1145/357994.358023
[22] 常庆贺, 吴敏华, 骆力明. 基于改进ZS细化算法的手写体汉字骨架提取[J]. 计算机应用与软件, 2020, 37(7): 107-113.
CHANG Qinghe, WU Minhua, LUO Liming. Handwritten Chinese character skeleton extraction based on improved ZS thinning algorithm[J]. Computer Applications and Software, 2020, 37(7): 107-113.