Journal of Textile Research ›› 2023, Vol. 44 ›› Issue (01): 171-178. DOI: 10.13475/j.fzxb.20211104908

• Apparel Engineering •

Cross-domain generation for transferring hand-drawn sketches to garment images

CHEN Jia1,2, YANG Congcong1, LIU Junping1, HE Ruhan1,2, LIANG Jinxing1,2

  1. School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei 430200, China
    2. Engineering Research Center of Hubei Province for Clothing Information, Wuhan, Hubei 430200, China
  Received: 2021-11-09; Revised: 2022-09-21; Online: 2023-01-15; Published: 2023-02-16

Abstract:

Objective Garment image synthesis, an important part of the garment design and manufacturing process, uses artificial intelligence technology to automatically generate realistic garment images. Garment design relies heavily on the subjective intent of the designer and is usually carried out manually, which makes the process time-consuming and inefficient. In the context of artificial intelligence, garment image synthesis can significantly improve efficiency by generating garment images automatically. It also has a wide range of applications in virtual try-on, fashion image manipulation and fashion presentation, and has therefore received considerable attention.
Method A garment image generation method based on hand-drawn sketches, named AGGAN, was proposed, in which input garment attributes guide the sketch to automatically generate the corresponding garment image. A generative adversarial network with an attention mechanism was adopted to learn from garment sketch and garment image data. Garment attributes were one-hot encoded and passed through an attribute incorporation module to obtain AdaIN (adaptive instance normalization) parameters, which were injected into the model. Through training, the model learned the correspondence between garment images and their visual attributes, and can therefore generate the corresponding garment image under the guidance of given garment attributes.
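As an illustration of the attribute incorporation idea described above, the following sketch (written in PyTorch, which the paper does not specify) maps a one-hot attribute vector to AdaIN scale and shift parameters that modulate generator feature maps. All module names, layer sizes and the attribute dimension are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttributeAdaIN(nn.Module):
    """Hypothetical attribute incorporation module: one-hot attributes -> AdaIN parameters."""
    def __init__(self, num_attributes: int, num_channels: int):
        super().__init__()
        # MLP that turns a one-hot attribute vector into per-channel
        # scale (gamma) and shift (beta) parameters.
        self.mlp = nn.Sequential(
            nn.Linear(num_attributes, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2 * num_channels),
        )
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)

    def forward(self, features: torch.Tensor, attributes: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W) generator feature maps
        # attributes: (N, num_attributes) one-hot encoded garment attributes
        gamma, beta = self.mlp(attributes).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # reshape to (N, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(features) + beta

# Usage: modulate a 256-channel feature map with an assumed 8-attribute code.
adain = AttributeAdaIN(num_attributes=8, num_channels=256)
feats = torch.randn(2, 256, 32, 32)
attrs = torch.eye(8)[[0, 3]]          # two one-hot attribute vectors
out = adain(feats, attrs)             # same shape as feats
```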
Results AGGAN was qualitatively compared with existing image generation methods (Fig.2). Compared with all baselines, the proposed AGGAN not only generates garment images in multiple colors, but also produces images that are visually closer to real garments. In addition, IS (inception score), FID (Fréchet inception distance), and MOS (mean opinion score) were used to evaluate the model quantitatively. The IS value of the garment images generated by the proposed method is 1.253 (Tab.1), which is 13.8% higher than that of CycleGAN (cycle-consistent generative adversarial networks) and higher than those of the other methods. The FID value is 139.634, which is 26.2% lower than that of CycleGAN and lower than those of the other methods. Beyond these two metrics, MOS was adopted to rate the quality of the garment images generated by each method; the proposed method obtained a MOS score of 4.352, higher than all other image generation methods. To control the generation of garment images more flexibly, experiments were conducted on attribute-guided garment image synthesis, in which garment attributes control how a sketch is synthesized into an image. Under sleeve-length control, the generated garment images show clear changes in the sleeve region without looking incongruous (Fig.3). The effect of color attributes on the generated images was also explored: with several common color attributes chosen for the experiments, AGGAN generated high-fidelity garment images in almost any specified color (Fig.4). Texture is the most intuitive and dominant visual feature of a garment image, and several texture attributes were also tested (Fig.5); the required textures are basically generated and the changes are clearly visible, although further improvement in realism is still needed.
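For reference, IS is defined as exp(E_x[KL(p(y|x) ‖ p(y))]), where p(y|x) are the class probabilities an Inception network assigns to a generated image and p(y) is their marginal over the generated set; higher is better. The sketch below computes it from precomputed softmax outputs; obtaining those outputs (e.g. from an Inception-v3 classifier) is omitted, and the function name is our own.

```python
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    # probs: (N, K) softmax class probabilities for N generated images
    marginal = probs.mean(axis=0, keepdims=True)           # p(y)
    kl = probs * (np.log(probs + eps) - np.log(marginal + eps))
    return float(np.exp(kl.sum(axis=1).mean()))            # exp of mean per-image KL

# Usage with random stand-in probabilities (real use feeds Inception outputs).
rng = np.random.default_rng(0)
logits = rng.normal(size=(64, 1000))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))
```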
Conclusion This research constructed a garment image generation model based on hand-drawn sketches by combining an attribute incorporation module, an attention mechanism and CycleGAN. Drawing on the advantages of generative adversarial networks and conditional image generation methods, it takes garment attributes as conditions to improve the controllability of the garment image generation process, which helps garment designers achieve automated garment image synthesis. A series of experiments demonstrated the feasibility and effectiveness of the proposed method, which provides new ideas for computer-aided garment design. Some improvements remain to be made: for example, the generated garment images do not reproduce texture attributes effectively, and only a limited set of garment attributes was studied.
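To make the CycleGAN backbone mentioned in the conclusion concrete, the sketch below shows the cycle-consistency constraint it rests on: a sketch translated to a garment image and back should reconstruct the original sketch, and vice versa. The generators G (sketch to image) and F (image to sketch) and the loss weight are placeholders, not the authors' training code.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           sketch: torch.Tensor, image: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    l1 = nn.L1Loss()
    loss_sketch = l1(F(G(sketch)), sketch)  # sketch -> image -> sketch
    loss_image = l1(G(F(image)), image)     # image -> sketch -> image
    return weight * (loss_sketch + loss_image)

# Usage with identity stand-ins for the two generators (shape check only).
G = nn.Identity()
F = nn.Identity()
x = torch.randn(1, 3, 128, 128)
y = torch.randn(1, 3, 128, 128)
print(cycle_consistency_loss(G, F, x, y))
```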

Key words: fashion design, hand-drawn sketch, deep learning, image generation, generative adversarial network, attention mechanism

CLC Number: TS942.8

Fig.1

Forward AGGAN framework

Fig.2

Comparison of results of AGGAN and other methods for generating garment images. (a) Input sketch 1; (b) Input sketch 2; (c) Input sketch 3; (d) Input sketch 4; (e) Input sketch 5

Tab.1

Quantitative comparison of AGGAN with other methods

Image generation method    IS value    FID value    MOS score
CycleGAN                   1.101       189.078      3.021
MUNIT                      1.178       211.793      3.222
USPS                       1.142       147.486      3.432
AGGAN                      1.253       139.634      4.352

Fig.3

Generated results for sleeve length attribute control. (a) Input sketch 1; (b) Input sketch 2; (c) Input sketch 3; (d) Input sketch 4; (e) Input sketch 5

Fig.4

Generated results for color attribute control. (a) Input sketch 1; (b) Input sketch 2; (c) Input sketch 3; (d) Input sketch 4; (e) Input sketch 5

Fig.5

Generated results for texture attribute control. (a) Input sketch 1; (b) Input sketch 2; (c) Input sketch 3; (d) Input sketch 4; (e) Input sketch 5

[1] CHEN T, CHENG M M, TAN P, et al. Sketch2photo: internet image montage[J]. ACM Transactions on Graphics (TOG), 2009, 28(5): 1-10.
[2] EITZ M, RICHTER R, HILDEBRAND K, et al. Photosketcher: interactive sketch-based image synthesis[J]. IEEE Computer Graphics and Applications, 2011, 31(6): 56-66. doi: 10.1109/MCG.2011.67
[3] CHEN W, HAYS J. SketchyGAN: towards diverse and realistic sketch to image synthesis[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City: IEEE Computer Society, 2018: 9416-9425.
[4] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144. doi: 10.1145/3422622
[5] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]// 2017 IEEE International Conference on Computer Vision (ICCV). Venice: IEEE Computer Society, 2017: 2223-2232.
[6] HUANG X, BELONGIE S. Arbitrary style transfer in real-time with adaptive instance normalization[C] // 2017 IEEE International Conference on Computer Vision (ICCV). Venice: IEEE Computer Society, 2017: 1510-1519.
[7] PING Q, WU B, DING W Y, et al. Fashion-AttGAN: attribute-aware fashion editing with multi-objective GAN[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach: IEEE Computer Society, 2019: 323-325.
[8] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017: 6629-6640.
[9] WANG T C, LIU M Y, ZHU J Y, et al. High-resolution image synthesis and semantic manipulation with conditional GANs[C]// Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City: IEEE Computer Society, 2018: 8798-8807.
[10] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]// Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE Computer Society, 2016: 2818-2826.
[11] VAN ERVEN T, HARREMOS P. Rényi divergence and Kullback-Leibler divergence[J]. IEEE Transactions on Information Theory, 2014, 60(7): 3797-3820. doi: 10.1109/TIT.2014.2320500
[12] HUANG X, LIU M Y, BELONGIE S, et al. Multimodal unsupervised image-to-image translation[C]// Proceedings of the 15th European Conference on Computer Vision (ECCV). Cham: Springer International Publishing, 2018: 172-189.
[13] LIU R, YU Q, YU S X. Unsupervised sketch to photo synthesis[C]// Proceedings of the 16th European Conference on Computer Vision (ECCV). Cham: Springer International Publishing, 2020: 36-52.