A Novel Hairstyle Transfer Method using Contextual Transformer Blocks
Authors: Huang, D., Li, Z., and Liu, J.
Journal: ACM International Conference Proceeding Series
Pages: 421-425
DOI: 10.1145/3654446.3654521
Abstract: Hair editing is an active research topic in computer vision. Because of the complex structure of hair, synthesizing and editing realistic fine-grained hair is highly challenging. To improve the visual quality and realism of generated images, we propose a hairstyle transfer method based on generative adversarial networks that uses Contextual Transformer Blocks. First, we design a Hair Segmentation Module (HSM) that embeds Contextual Transformer Blocks into the convolutional operations of the segmentation architecture, enhancing visual representation and yielding more refined hairstyle structure and hair color in the generated images. Second, we propose a Hair Mapper Module (HMM) that maps hair color information to higher-level semantic information by changing the direction of the mapper, achieving color consistency with the reference image. Finally, we introduce a new hair color loss function to manipulate hair in a decoupled manner. Extensive experiments on the CelebA-HQ dataset show that our method generates images with higher consistency in hairstyle and hair color; compared with state-of-the-art methods, it achieves the best overall performance.
Source: Scopus
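
Note: The Contextual Transformer (CoT) block referenced in the abstract follows the design of Li et al. (CoTNet, 2021), which replaces a standard convolution with a unit fusing static context (a grouped k x k convolution over the keys) with dynamic context (attention predicted from the concatenated keys and queries, applied to a 1x1-projected value map). The PyTorch sketch below is a simplified illustration of that idea, not the authors' implementation: the gating uses a sigmoid in place of CoTNet's local multi-head softmax aggregation, and all layer sizes (dim, groups, the dim // 2 bottleneck) are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CoTBlock(nn.Module):
        """Simplified Contextual Transformer block (after Li et al., 2021).

        Static context:  grouped k x k convolution over the keys.
        Dynamic context: per-position gate predicted from [key, query],
                         applied to a 1x1-projected value map.
        The two contexts are summed to form the output.
        """

        def __init__(self, dim: int, kernel_size: int = 3):
            super().__init__()
            pad = kernel_size // 2
            # Static context: contextualize keys with a grouped k x k conv.
            self.key_embed = nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, padding=pad, groups=4, bias=False),
                nn.BatchNorm2d(dim),
                nn.ReLU(inplace=True),
            )
            # Value projection (1x1 conv).
            self.value_embed = nn.Sequential(
                nn.Conv2d(dim, dim, 1, bias=False),
                nn.BatchNorm2d(dim),
            )
            # Predict attention from concatenated [static key, query].
            self.attn = nn.Sequential(
                nn.Conv2d(2 * dim, dim // 2, 1, bias=False),
                nn.BatchNorm2d(dim // 2),
                nn.ReLU(inplace=True),
                nn.Conv2d(dim // 2, dim, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            k_static = self.key_embed(x)                        # static context
            v = self.value_embed(x)                             # values
            gate = torch.sigmoid(self.attn(torch.cat([k_static, x], dim=1)))
            k_dynamic = gate * v                                # dynamic context
            return k_static + k_dynamic                         # fused output

    if __name__ == "__main__":
        block = CoTBlock(dim=64)
        feats = torch.randn(1, 64, 32, 32)
        print(block(feats).shape)  # torch.Size([1, 64, 32, 32])

Because the block preserves spatial resolution and channel count, it can be dropped into a segmentation encoder in place of a standard 3x3 convolution, which matches how the abstract describes embedding CoT blocks into the HSM's convolutional operations.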