FrseGAN: Free-style editable facial makeup transfer based on GAN combined with transformer

Authors: Xu, W., Wang, P. and Yang, X.

Journal: Computer Animation and Virtual Worlds

Volume: 35

Issue: 3

eISSN: 1546-427X

ISSN: 1546-4261

DOI: 10.1002/cav.2235

Abstract:

Makeup in real life varies widely and is highly personalized, which presents a key challenge in makeup transfer. Most previous makeup transfer techniques divide the face into distinct regions for color transfer, frequently neglecting details such as eyeshadow and facial contours. Given the success of Transformers in various visual tasks, we believe this technology holds great potential for addressing pose, expression, and occlusion differences. To explore this, we propose a novel pipeline that combines a well-designed Convolutional Neural Network with a Transformer to leverage the advantages of both networks for high-quality facial makeup transfer. This enables hierarchical extraction of both local and global facial features, facilitating the encoding of facial attributes into pyramid feature maps. Furthermore, a Low-Frequency Information Fusion Module is proposed to address the problem of large pose and expression variations between the source and reference faces by extracting makeup features from the reference and adapting them to the source. Experiments demonstrate that our method produces makeup faces that are visually more detailed and realistic, yielding superior results.
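The abstract does not give implementation details, but the underlying intuition of the Low-Frequency Information Fusion Module can be illustrated: low-frequency image content (coarse color, i.e., makeup tone) can be separated from high-frequency content (identity-preserving edges and texture) by low-pass filtering. The sketch below is a hypothetical, greatly simplified stand-in using a separable box blur, not the paper's actual module; all function names and the kernel size are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=15):
    """Separable box blur as a cheap low-pass filter (hypothetical
    stand-in for a learned low-frequency extractor)."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(k) / k
    # mean-filter along width, then along height
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def split_frequencies(img, k=15):
    """Decompose an H x W x 3 float image into a low-frequency part
    (coarse color) and a high-frequency residual (edges, texture)."""
    low = box_blur(img, k)
    high = img - low
    return low, high

def naive_color_transfer(source, reference, k=15):
    """Keep the source's high-frequency detail, borrow the reference's
    low-frequency color -- a crude proxy for makeup transfer, far
    simpler than the GAN/Transformer pipeline described above."""
    _, src_high = split_frequencies(source, k)
    ref_low, _ = split_frequencies(reference, k)
    return ref_low + src_high
```

Because the decomposition is exactly additive (low + high reconstructs the input), swapping in the reference's low-frequency band changes coarse color while leaving the source's fine structure untouched; the paper's learned module presumably does this adaptively rather than with a fixed blur.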

https://eprints.bournemouth.ac.uk/40075/

Source: Scopus

Source: Web of Science (Lite)

Source: BURO EPrints