Authors: Youngki Kwon 1; Soomin Kim 1; Donggeun Yoo 2 and Sung-Eui Yoon 1
Affiliations: 1 School of Computing, KAIST, Daejeon, Republic of Korea; 2 Lunit Inc., Seoul, Republic of Korea
Keyword(s):
Generative Adversarial Networks, Image-Conditional Image Generation, Cloth Image Generation, Coarse-to-Fine
Related Ontology Subjects/Areas/Topics: Computer Vision, Visualization and Computer Graphics; Image Enhancement and Restoration; Image Formation and Preprocessing; Image Generation Pipeline: Algorithms and Techniques
Abstract:
Clothing image generation is the task of generating clothing product images from input fashion images of dressed people. Results of existing GAN-based methods often contain visual artifacts and suffer from a global consistency issue. To address this, we split the difficult single-stage image generation process into multiple, relatively easy stages. We thus propose a coarse-to-fine strategy for an image-conditional image generation model, with a multi-stage network training method called rough-to-detail training. To make the generator network suitable for rough-to-detail training, we incrementally add a decoder block at each stage that progressively produces an intermediate target image. With this coarse-to-fine process, our model can generate images ranging from small sizes with rough structures to large sizes with fine details. To validate our model, we perform various quantitative comparisons and a human perception study on the LookBook dataset. Compared to other conditional GAN methods, our model creates visually pleasing 256 × 256 clothing images while preserving the global structure and the details of target images.
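The abstract does not include code, but the staging idea it describes (a decoder that gains one block per training stage, each block doubling the output resolution from a coarse base up to 256 × 256) can be illustrated with a toy sketch. All names here are hypothetical, and nearest-neighbor upsampling stands in for the paper's learned decoder blocks:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of an (H, W, C) feature map;
    # a real decoder block would also apply learned convolutions here.
    return x.repeat(2, axis=0).repeat(2, axis=1)

class CoarseToFineDecoder:
    """Toy sketch of rough-to-detail training: each stage appends one
    decoder block, doubling the generated resolution (e.g. 32 -> 256)."""

    def __init__(self, base_size=32, channels=3):
        self.base_size = base_size
        self.channels = channels
        self.num_blocks = 0  # grows by one per training stage

    def add_block(self):
        # Incrementally extend the generator for the next stage.
        self.num_blocks += 1

    def generate(self, value=0.0):
        # Start from a coarse image with rough structure, then let each
        # block refine and enlarge it toward the detailed target.
        x = np.full((self.base_size, self.base_size, self.channels), value)
        for _ in range(self.num_blocks):
            x = upsample2x(x)
        return x

dec = CoarseToFineDecoder(base_size=32)
for _ in range(3):          # three stages: 32 -> 64 -> 128 -> 256
    dec.add_block()
out = dec.generate(value=0.5)
print(out.shape)            # (256, 256, 3)
```

This only mimics the resolution schedule; the actual model additionally trains each intermediate output against a downsampled target image before the next block is added.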