Few-shot anime pose transfer

Authors: Wang, P., Yang, K., Yuan, C., Li, H., Tang, W. and Yang, X.

Journal: The Visual Computer

Volume: 40

Issue: 7

Pages: 4635-4646

ISSN: 0178-2789

DOI: 10.1007/s00371-024-03447-7

Abstract:

In this paper, we propose a few-shot method for pose transfer of anime characters: given a source image of an anime character and a target pose, we transfer the pose of the target to the source character. Despite recent advances in pose transfer on images of real people, these methods typically require large numbers of training images of different people in different poses to achieve reasonable results. Anime character images, however, are expensive to obtain because they require substantial artistic authoring. To address this, we propose a meta-learning framework for few-shot pose transfer that generalizes well to an unseen character given just a few examples of that character. Further, we propose fusion residual blocks to align the features of the source and target so that the appearance of the source character transfers well to the target pose. Experiments show that our method outperforms leading pose transfer methods, especially when the source characters are not in the training set.
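The abstract's "fusion residual blocks" align source-appearance and target-pose features before decoding. The paper itself does not give the layer details here, so the following is only a minimal illustrative sketch of the general idea (concatenate the two feature maps, project them, and add the result back to the target features as a residual); the function name, shapes, and the 1x1-convolution-as-matmul formulation are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_residual_block(src_feat, tgt_feat, weight):
    """Illustrative fusion of appearance and pose features.

    src_feat, tgt_feat: (C, H, W) feature maps.
    weight: (C, 2C) projection over the concatenated channels,
            equivalent to a 1x1 convolution.
    The projected fusion is added to the target features as a
    residual, so pose structure is kept while appearance
    information from the source is injected.
    """
    fused = np.concatenate([src_feat, tgt_feat], axis=0)   # (2C, H, W)
    c2, h, w = fused.shape
    proj = weight @ fused.reshape(c2, h * w)               # 1x1 conv as matmul
    proj = np.maximum(proj, 0.0)                           # ReLU nonlinearity
    return tgt_feat + proj.reshape(-1, h, w)               # residual add

C, H, W = 8, 4, 4
src = rng.standard_normal((C, H, W))
tgt = rng.standard_normal((C, H, W))
W_fuse = rng.standard_normal((C, 2 * C)) * 0.1
out = fusion_residual_block(src, tgt, W_fuse)
print(out.shape)  # (8, 4, 4)
```

Note that with a zero projection weight the block reduces to the identity on the target features, which is the usual motivation for residual designs.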

https://eprints.bournemouth.ac.uk/39944/

Source: Scopus

eISSN: 1432-2315

Source: Web of Science (Lite)

Source: BURO EPrints