Journal of Textile Research ›› 2023, Vol. 44 ›› Issue (09): 188-196. DOI: 10.13475/j.fzxb.20220706301

• Apparel Engineering •

VR-oriented personalized head and face texture generation technology of dressed human body

CHEN Jinwen1, WANG Xin1, LUO Weihao1, MEI Chennan1, WEI Jingyan1, ZHONG Yueqi1,2

  1. College of Textiles, Donghua University, Shanghai 201620, China
  2. Key Laboratory of Textile Science & Technology, Ministry of Education, Donghua University, Shanghai 201620, China
  • Received: 2022-07-19  Revised: 2023-03-24  Online: 2023-09-15  Published: 2023-10-30

Abstract:

Objective In recent years, the metaverse concept, built on virtual reality (VR), augmented reality, and mixed reality technology, has developed vigorously. In particular, virtual avatars customized to individual users are widely used in virtual reality. From the perspective of technical composition, three-dimensional dressing of the human body has become common. However, endowing the dressed human body with the user's own head and facial features, and applying the result in virtual reality and even metaverse scenes, remains a problem worthy of further exploration.

Method Taking a single frontal face image of the user as input, the detailed expression capture and animation (DECA) neural network was adopted to generate the corresponding 3D head model with an initial texture. The initial texture was then corrected for illumination, and the skin colors of the main facial area and the facial edge were extracted with a given mask. The facial texture was inpainted twice to obtain a complete face texture. Using the face texture as the style image and the original body texture of the dressed human body as the content image, style transfer was carried out by the progressive attentional manifold alignment (PAMA) neural network.
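To make the texture repair steps concrete, the following is a minimal Python sketch of the illumination correction and hole filling, assuming the 2D gamma correction of Ref. [10] and OpenCV's Telea fast-marching inpainting of Ref. [12]; the HSV-based brightness handling, function names, and parameter values are illustrative assumptions, not the authors' exact implementation.

    import cv2
    import numpy as np

    def correct_illumination(texture_bgr, ksize=5):
        # Estimate the illumination component by Gaussian smoothing of the
        # brightness (V) channel; a 5x5 kernel matches the paper's best result.
        hsv = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        v = hsv[:, :, 2]
        illum = cv2.GaussianBlur(v, (ksize, ksize), 0)
        m = float(np.mean(illum))
        # 2-D gamma correction (Ref. [10]): brighten under-lit pixels and
        # darken over-lit ones according to the local illumination estimate.
        gamma = np.power(0.5, (m - illum) / max(m, 1e-6))
        hsv[:, :, 2] = 255.0 * np.power(v / 255.0, gamma)
        hsv = np.clip(hsv, 0, 255).astype(np.uint8)
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    def repair_texture(texture_bgr, hole_mask):
        # Fill missing texture regions (non-zero pixels in the 8-bit,
        # single-channel hole_mask) with Telea's fast-marching
        # inpainting (Ref. [12]).
        return cv2.inpaint(texture_bgr, hole_mask, 3, cv2.INPAINT_TELEA)

Note that the paper combines two repairing passes (assisted and rapid, cf. Fig. 12); the sketch above covers only the rapid, fast-marching pass.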

Results In the analysis of factors influencing the illumination correction of the facial texture, a 5×5 convolution kernel reduced the difference of local brightness V1 between the two sides of the face by 23.1% (Fig. 8), showing that illumination correction reduces the brightness difference of the user's facial texture and improves the illumination uniformity of the face, with the best scale factor c being 5. In the analysis of automatic facial texture restoration, a weight coefficient of 0.4 brought the head and neck texture closest to the facial skin color, and combining the two repairing methods produced the best result (Fig. 12(c)). Texture restoration results for different users (Fig. 13) show that, after the original texture was restored and completed, the texture of the user's head and face model matched the neck skin color of the original three-dimensional dressed mannequin with natural boundary fusion, while the user's original facial information was retained. Compared with other head and face texture restoration methods based on a single image (Fig. 14), the results generated by this method show the highest similarity to the users and took the shortest time (8 s per person) under the same configuration. In the analysis of style transfer of the body texture, the PAMA style transfer network made the skin tone of the body texture in virtual reality similar to that of the user's face texture (Fig. 15). In the VR application, rendering results of the user's dressed human body in various virtual scenes (Fig. 17) show that the model presents realistic light and shade effects in different indoor and outdoor lighting environments, demonstrating the practicability of this method in VR scenes.
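As a companion sketch, the skin color conversion controlled by the weight coefficient can be written as a linear blend between the solid skin color image Im and the head and neck template texture Um; the blend rule below is an assumption inferred from Fig. 4 and Fig. 11, with q = 0.4 taken from the reported best result.

    import numpy as np

    def convert_skin_color(template_um, solid_im, q=0.4):
        # Blend the template texture toward the user's extracted skin color;
        # q = 0.4 reportedly brings the head and neck texture closest to the
        # facial skin color.  Both inputs are uint8 images of the same shape.
        um = template_um.astype(np.float32)
        im = solid_im.astype(np.float32)
        uhead = q * im + (1.0 - q) * um
        return np.clip(uhead, 0, 255).astype(np.uint8)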

Conclusion Oriented toward virtual reality applications, this paper studied the generation of personalized head and face textures for the dressed human body and proposed a head and face texture fusion and restoration technique. From a single frontal face image, a complete head and face texture and the corresponding body texture can be generated. This solves the problem of naturally fusing the skin colors of head and face textures and body textures from different sources, and meets the needs of virtual fitting applications in various VR environments.

Key words: users' personalized customization, face texture, illumination correction, texture repairing, neural style transfer, virtual reality

CLC Number: TS941

Fig. 1  Flow chart of facial texture fusion and repairing. (a) Image input by user; (b) Initial texture U0; (c) Head and neck template texture Um; (d) Texture Ure generated by proposed method; (e) Texture mapping rendering results

Fig. 2  Flow chart of illumination correction

Fig. 3  Flow chart of facial texture extraction

Fig. 4  Skin color conversion effect. (a) Solid color image Im; (b) Head and neck template texture Um; (c) Head and neck texture Uhead

Fig. 5  Incomplete texture map

Fig. 6  Flow chart of texture completion

Fig. 7  Illustration of body texture style transfer

Fig. 8  Brightness difference of different convolution kernels

Fig. 9  Results after illumination correction with different convolution kernels. (a) Convolution kernel of 5×5; (b) Convolution kernel of 15×15; (c) Convolution kernel of 25×25; (d) Convolution kernel of 35×35; (e) Circular mask with diameter of 25 pixels; (f) Face part extracted by mask

Fig. 10  Brightness difference of different scale factors c

Fig. 11  Skin color conversion results of different weight factors. (a) q=0.1; (b) q=0.2; (c) q=0.3; (d) q=0.4; (e) q=0.5; (f) Extracted texture Utex

Fig. 12  Results of texture repairing by different methods. (a) Assisted repairing; (b) Rapid repairing; (c) Repairing by both methods; (d) Finally completed texture Ure

Fig. 13  Texture repairing results. (a) User's frontal view; (b) Initial texture map; (c) Results of proposed method; (d) Initial texture rendering result; (e) Rendering results of proposed method

Fig. 14  Comparison with other head and face texture repairing methods based on a single image. (a) Input image; (b) Results of Avatar SDK; (c) Results of BFM; (d) Results of proposed method

Fig. 15  Style transfer results. (a) Original body texture; (b) Body texture after style transfer; (c) Body texture converted by weight coefficient method

Fig. 16  Virtual reality application effect. (a) User's frontal view; (b) Original dressed human body; (c) User's dressed human body; (d) Short hairstyle of user's dressed human body; (e) Long hairstyle of user's dressed human body

Fig. 17  VR rendering effect. (a) VR scene 1; (b) VR scene 2; (c) VR scene 3

[1] ALLDIECK T, ZANFIR M, SMINCHISESCU C. Photorealistic monocular 3D reconstruction of humans wearing clothing[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE Communications Society, 2022: 1496-1505.
[2] ALEXANDER O, ROGERS M, LAMBETH W, et al. The Digital Emily Project: photoreal facial modeling and animation[C]// SIGGRAPH '09: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New Orleans: Association for Computing Machinery, 2009, 12: 1-15.
[3] JACKSON A S, BULAT A, ARGYRIOU V, et al. Large pose 3D face reconstruction from a single image via direct volumetric CNN regression[C]// 2017 IEEE International Conference on Computer Vision. Venice: IEEE Communications Society, 2017: 1031-1039.
[4] HU L, HAO L, SAITO S, et al. Avatar digitization from a single image for real-time rendering[C]// SIGGRAPH '17: Special Interest Group on Computer Graphics and Interactive Techniques Conference. Los Angeles: Association for Computing Machinery, 2017, 36(6): 1-14.
[5] LATTAS A, MOSCHOGLOU S, GECER B, et al. AvatarMe: realistically renderable 3D facial reconstruction "in-the-wild"[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle: IEEE Communications Society, 2020: 757-766.
[6] FENG Y, FENG H, BLACK M J, et al. Learning an animatable detailed 3D face model from in-the-wild images[J]. ACM Transactions on Graphics, 2021, 40(4): 88.
[7] HE Huayun. Data-driven 3D human head reconstruction[D]. Guangzhou: South China University of Technology, 2018: 45-48.
[8] LI Jun, ZHANG Mingmin, PAN Zhigeng, et al. Virtual try-on by replacing the person in image[J]. Journal of Computer-Aided Design & Computer Graphics, 2015, 27(9): 1694-1700.
[9] XU Yanqiu. Facial feature extraction and reconstruction for 3D virtual try-on of real humans[D]. Shanghai: Shanghai University of Engineering Science, 2011: 6-9.
[10] LIU Zhicheng, WANG Dianwei, LIU Ying, et al. Adaptive adjustment algorithm for non-uniform illumination images based on 2D gamma function[J]. Transactions of Beijing Institute of Technology, 2016, 36(2): 191-196, 214.
[11] ZENG Y, LIN Z, LU H, et al. CR-Fill: generative image inpainting with auxiliary contextual reconstruction[C]// 2021 IEEE International Conference on Computer Vision. Montreal: IEEE Communications Society, 2021: 14164-14173.
[12] TELEA A. An image in-painting technique based on the fast marching method[J]. Journal of Graphics Tools, 2004, 9(1): 25-36.
[13] LUO Xuan, HAN Zhen, YANG Lingkang, et al. Progressive attentional manifold alignment for arbitrary style transfer[C]// The 16th Asian Conference on Computer Vision. Macau: Asian Federation of Computer Vision, 2022: 3206-3222.
[14] LYSENKOV I. Avatar SDK demo: create an avatar from a single selfie[EB/OL]. [2022-11-13]. https://webdemo.avatarsdk.com.
[15] FENG Y, CHOUTAS V, BOLKART T, et al. Collaborative regression of expressive bodies using moderation[C]// 2021 International Conference on 3D Vision. [S.l.]: IEEE Communications Society, 2021: 792-804.