Facial expression animation through action units transfer in latent space

Authors: Fan, Y., Tian, F., Tan, X. and Cheng, H.

Journal: Computer Animation and Virtual Worlds

Volume: 31

Issue: 4-5

eISSN: 1546-427X

ISSN: 1546-4261

DOI: 10.1002/cav.1946

Abstract:

Automatic animation synthesis has attracted much attention from the community. Because most existing methods handle only a small number of discrete expressions rather than continuous ones, the integrity and realism of the resulting facial expressions are often compromised. In addition, easy manipulation with simple inputs and unsupervised processing, although important for automatic facial expression animation applications, has received relatively little attention. To address these issues, we propose an unsupervised, continuous, automatic facial expression animation approach based on action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, represented as an AU vector, is transferred to the input image without the need for labeled image pairs, expression annotations, or further network training. We also propose a new approach to quickly generate the input image's latent code and to compute the boundaries of different AU attributes from their latent codes. Two latent-code operators, vector addition and continuous interpolation, are leveraged to simulate facial expression animation in alignment with these boundaries in the latent space. Experiments show that the proposed approach is effective for facial expression translation and animation synthesis.
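The two latent-code operators named in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: it assumes latent codes are NumPy vectors and that `boundary_normal` is a unit normal of a learned AU attribute boundary in latent space; all function and variable names here are hypothetical.

```python
import numpy as np

def transfer_au(latent_code, boundary_normal, strength):
    """Vector addition: shift a latent code along an AU boundary normal
    to strengthen or weaken that action unit in the decoded image."""
    return latent_code + strength * boundary_normal

def interpolate(latent_src, latent_dst, steps):
    """Continuous interpolation: intermediate latent codes between two
    expressions, yielding a smooth animation when each code is decoded."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * latent_src + t * latent_dst for t in ts]

# Toy example in a 4-D latent space.
z = np.zeros(4)                       # latent code of the input image
n = np.array([1.0, 0.0, 0.0, 0.0])    # unit normal for one AU attribute
edited = transfer_au(z, n, strength=0.5)
frames = interpolate(z, edited, steps=5)
```

Decoding each element of `frames` through the generator would produce the in-between animation frames; the `strength` parameter controls how far the expression moves past the attribute boundary.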

http://eprints.bournemouth.ac.uk/34628/

Source: Scopus

Facial expression animation through action units transfer in latent space

Authors: Fan, Y., Tian, F., Tan, X. and Cheng, H.

Journal: Computer Animation and Virtual Worlds

Volume: 31

Issue: 4-5

eISSN: 1546-427X

ISSN: 1546-4261

DOI: 10.1002/cav.1946

http://eprints.bournemouth.ac.uk/34628/

Source: Web of Science (Lite)

Facial expression animation through action units transfer in latent space

Authors: Fan, Y., Tian, F., Tan, X. and Cheng, H.

Journal: Computer Animation and Virtual Worlds

eISSN: 1546-427X

ISSN: 1546-4261

DOI: 10.1002/cav.1946

http://eprints.bournemouth.ac.uk/34628/

Source: Manual

Preferred by: Feng Tian