Journal of Textile Research (纺织学报) ›› 2023, Vol. 44 ›› Issue (09): 188-196. DOI: 10.13475/j.fzxb.20220706301

• Apparel Engineering •

VR-oriented personalized head and face texture generation technology of dressed human body

CHEN Jinwen1, WANG Xin1, LUO Weihao1, MEI Chennan1, WEI Jingyan1, ZHONG Yueqi1,2

  1. College of Textiles, Donghua University, Shanghai 201620, China
  2. Key Laboratory of Textile Science & Technology, Ministry of Education, Donghua University, Shanghai 201620, China
  • Received: 2022-07-19  Revised: 2023-03-24  Published: 2023-09-15  Online: 2023-10-30
  • Corresponding author: ZHONG Yueqi (1972—), male, professor, Ph.D. His research focuses on digital textile and apparel technology. E-mail: zhyq@dhu.edu.cn
  • About the first author: CHEN Jinwen (1998—), female, master's candidate. Her research focuses on deep-learning-based texture restoration.
  • Funding: Natural Science Foundation of Shanghai (21ZR1403000)


Abstract:

To enhance the user's sense of immersion in virtual reality and allow identical dressed human bodies to carry customized user faces, this paper proposes generating a 3-D head model and an initial texture from a single frontal face image of the user via the DECA neural network. On this basis, the initial texture is illumination-corrected, the main facial region and the edge skin color are extracted with masks, and the head and neck texture undergoes skin color conversion followed by two rounds of facial texture restoration, yielding a complete head and face texture. This texture is then used as the style image and the original body texture of the dressed human body as the content image, and style transfer is performed with the PAMA neural network to obtain a skin-color-converted body texture. Remapping the processed head and face texture together with the body texture onto the 3-D model produces a customized dressed human body bearing the user's features. Experimental results show that the head and face texture restored by this method preserves the user's facial information to the greatest extent, with a generation time of only about 1.4 s; the transition between the converted facial and body skin tones is natural; and the mapped dressed human body is highly realistic and suitable for various virtual reality application scenes.
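The abstract does not detail the illumination correction step beyond citing the 2-D gamma method of ref [10]; the Python sketch below is a hypothetical re-implementation of that idea, with the 5×5 Gaussian kernel and scale factor of 5 taken from the Results reported later. The function name and interface are illustrative assumptions, not the authors' code.

```python
import cv2
import numpy as np

def correct_illumination(face_bgr, ksize=5, c=5):
    """Hypothetical 2-D gamma illumination correction (after ref [10]).

    ksize: Gaussian kernel size (5x5 performed best in the paper).
    c: scale factor used here as the Gaussian sigma (best value 5).
    """
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float32)
    # Estimate the illumination map by low-pass filtering the V channel.
    illum = cv2.GaussianBlur(v, (ksize, ksize), c)
    m = float(illum.mean())
    # Per-pixel gamma: values < 1 brighten under-lit pixels, > 1 darken over-lit ones.
    gamma = np.power(0.5, (m - illum) / m)
    hsv[:, :, 2] = np.clip(255.0 * np.power(v / 255.0, gamma), 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```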

Keywords: customized dressed human body, head and face texture, illumination correction, texture restoration, style transfer, virtual reality

Abstract:

Objective In recent years, the metaverse concept, built on virtual reality (VR), augmented reality, and mixed reality technologies, has developed vigorously. In particular, virtual avatars personalized to individual users are widely used in virtual reality. From the perspective of technical composition, three-dimensional dressing of the human body has become common; however, endowing the dressed human body with the user's own head and facial features, and applying the result in VR and even metaverse scenes, remains a problem worth further exploration.

Method From a single frontal face image of the user, the detailed expression capture and animation (DECA) neural network was adopted to generate the corresponding 3-D head model with an initial texture. The initial texture was then illumination-corrected, and the skin colors of the main facial area and of the face edge were extracted with given masks. The facial texture was inpainted twice to obtain a complete head and face texture. Using this texture as the style image and the original body texture of the dressed human body as the content image, style transfer was carried out with the progressive attentional manifold alignment (PAMA) neural network.
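The paper's own repair code is not shown; as a rough illustration of the skin color conversion plus the classical second inpainting pass, the sketch below blends the mean face color with the mean edge color using a weight q (0.4 is the best value reported in the Results) and then applies Telea fast-marching inpainting (ref [12]) via OpenCV. The first, learning-based pass with CR-Fill (ref [11]) is omitted; the function name and mask convention are assumptions.

```python
import cv2
import numpy as np

def repair_head_texture(tex_bgr, face_mask, edge_mask, hole_mask, q=0.4):
    """Sketch of skin color fill plus Telea inpainting.

    All masks are single-channel uint8 images with 255 marking the region.
    q weights the facial skin color against the edge skin color; q = 0.4
    matched the neck skin color best in the paper's experiments.
    """
    face_mean = np.array(cv2.mean(tex_bgr, mask=face_mask)[:3])
    edge_mean = np.array(cv2.mean(tex_bgr, mask=edge_mask)[:3])
    # Weighted skin color used to pre-fill the missing head/neck regions.
    fill = np.clip(q * face_mean + (1 - q) * edge_mean, 0, 255).astype(np.uint8)
    out = tex_bgr.copy()
    out[hole_mask > 0] = fill
    # Fast-marching inpainting (ref [12]) smooths the seams of the filled area.
    return cv2.inpaint(out, hole_mask, 3, cv2.INPAINT_TELEA)
```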

Results In the analysis of the factors influencing facial texture illumination correction, a 5×5 convolution kernel reduced the local brightness difference between the two sides of the face by 23.1% (Fig. 2), showing that illumination correction reduces the brightness difference of the user's facial texture and improves its illumination uniformity, with the best scale factor being 5. In the analysis of automatic facial texture restoration, a weight coefficient of 0.4 brought the head and neck texture closest to the facial skin color, and the combination of the two inpainting methods performed best (Fig. 12(c)). Texture restoration results for different users (Fig. 13) show that, after the original texture was restored and completed, the texture of the user's head and face model matched the neck skin color of the original three-dimensional dressed mannequin with natural boundary fusion, while the user's original facial information was retained. A comparison with other head and face texture restoration methods based on a single image (Fig. 14) shows that this method generated results with the highest similarity to the users and took the shortest time (8 s per person) under the same configuration. In the analysis of body texture style transfer, the PAMA network made the skin tone of the body texture similar to that of the user's face texture (Fig. 15). In the VR application, renderings of the user's dressed human body in various virtual scenes (Fig. 17) show that the model presents realistic light and shade effects in different indoor and outdoor lighting environments, demonstrating the practicability of this method in VR scenes.
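PAMA (ref [13]) is a trained attention network and cannot be reproduced in a few lines. Purely to illustrate the goal of this step, namely pulling the body texture's skin tone toward the user's face texture, here is a classical Reinhard-style statistics transfer in Lab color space; this is a plainly named stand-in, not the method used in the paper.

```python
import cv2
import numpy as np

def match_skin_tone(body_bgr, face_bgr):
    """Reinhard-style color transfer: align the per-channel Lab statistics
    of the body texture (content) with those of the face texture (style)."""
    src = cv2.cvtColor(body_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for ch in range(3):
        s_mu, s_sd = src[:, :, ch].mean(), src[:, :, ch].std() + 1e-6
        r_mu, r_sd = ref[:, :, ch].mean(), ref[:, :, ch].std() + 1e-6
        # Shift and rescale each channel to the reference mean and std.
        src[:, :, ch] = (src[:, :, ch] - s_mu) / s_sd * r_sd + r_mu
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```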

Conclusion Oriented to virtual reality applications, this paper studied the generation of personalized head and face textures for the dressed human body and proposed a head and face texture fusion and restoration technique. From a single frontal face image, a complete head and face texture and a matching body texture can be generated, which solves the problem of naturally fusing head-and-face and body texture skin colors from different sources and meets the needs of virtual fitting in various VR environments.

Key words: users' personalized customization, face texture, illumination correction, texture repairing, neural style transfer, virtual reality

CLC number: TS941

Fig. 1  Flow chart of head and face texture fusion and restoration
Fig. 2  Flow chart of illumination correction
Fig. 3  Flow chart of facial texture extraction
Fig. 4  Effect of skin color conversion
Fig. 5  Texture map before completion
Fig. 6  Flow chart of texture completion
Fig. 7  Style transfer of body texture
Fig. 8  Brightness differences for different convolution kernels
Fig. 9  Results of illumination correction with different convolution kernels
Fig. 10  Brightness differences for different scale factors c
Fig. 11  Skin color conversion results for different weight factors q
Fig. 12  Texture restoration results of different methods
Fig. 13  Texture restoration results
Fig. 14  Comparison with other single-image-based head and face texture restoration methods
Fig. 15  Style transfer results
Fig. 16  Virtual reality application results
Fig. 17  Virtual reality rendering results

[1] ALLDIECK T, ZANFIR M, SMINCHISESCU C. Photorealistic monocular 3D reconstruction of humans wearing clothing[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE Computer Society, 2022: 1496-1505.
[2] ALEXANDER O, ROGERS M, LAMBETH W, et al. The Digital Emily project: photoreal facial modeling and animation[C]// SIGGRAPH '09: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New Orleans: Association for Computing Machinery, 2009, 12: 1-15.
[3] JACKSON A S, BULAT A, ARGYRIOU V, et al. Large pose 3D face reconstruction from a single image via direct volumetric CNN regression[C]// 2017 IEEE International Conference on Computer Vision. Venice: IEEE Computer Society, 2017: 1031-1039.
[4] HU L, HAO L, SAITO S, et al. Avatar digitization from a single image for real-time rendering[C]// SIGGRAPH '17: Special Interest Group on Computer Graphics and Interactive Techniques Conference. Los Angeles: Association for Computing Machinery, 2017, 36(6): 1-14.
[5] LATTAS A, MOSCHOGLOU S, GECER B, et al. AvatarMe: realistically renderable 3D facial reconstruction "in-the-wild"[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle: IEEE Computer Society, 2020: 757-766.
[6] FENG Y, FENG H, BLACK M J, et al. Learning an animatable detailed 3D face model from in-the-wild images[J]. ACM Transactions on Graphics, 2021, 40(4): 88.
[7] HE Huayun. Data-driven 3D human head reconstruction[D]. Guangzhou: South China University of Technology, 2018: 45-48.
[8] LI Jun, ZHANG Mingmin, PAN Zhigeng, et al. Virtual try-on by replacing the person in image[J]. Journal of Computer-Aided Design & Computer Graphics, 2015, 27(9): 1694-1700.
[9] XU Yanqiu. Facial feature extraction and reconstruction for realistic 3-D virtual fitting[D]. Shanghai: Shanghai University of Engineering Science, 2011: 6-9.
[10] LIU Zhicheng, WANG Dianwei, LIU Ying, et al. Adaptive adjustment algorithm for non-uniform illumination images based on 2D gamma function[J]. Transactions of Beijing Institute of Technology, 2016, 36(2): 191-196, 214.
[11] ZENG Y, LIN Z, LU H, et al. CR-Fill: generative image inpainting with auxiliary contextual reconstruction[C]// 2021 IEEE International Conference on Computer Vision. Montreal: IEEE Computer Society, 2021: 14164-14173.
[12] TELEA A. An image inpainting technique based on the fast marching method[J]. Journal of Graphics Tools, 2004, 9(1): 25-36.
[13] LUO Xuan, HAN Zhen, YANG Lingkang, et al. Progressive attentional manifold alignment for arbitrary style transfer[C]// The 16th Asian Conference on Computer Vision. Macau: Asian Federation of Computer Vision, 2022: 3206-3222.
[14] LYSENKOV I. Avatar SDK demo: create an avatar from a single selfie[EB/OL]. [2022-11-13]. https://webdemo.avatarsdk.com.
[15] FENG Y, CHOUTAS V, BOLKART T, et al. Collaborative regression of expressive bodies using moderation[C]// 2021 International Conference on 3D Vision. [S.l.]: IEEE Computer Society, 2021: 792-804.