Semantic portrait color transfer with internet images

Authors: Yang, Y., Zhao, H., You, L., Tu, R., Wu, X. and Jin, X.

Journal: Multimedia Tools and Applications

Volume: 76

Issue: 1

Pages: 523-541

eISSN: 1573-7721

ISSN: 1380-7501

DOI: 10.1007/s11042-015-3063-x

Abstract:

We present a novel color transfer method for portraits that explores their high-level semantic information. First, we set up a database of portrait images downloaded from the Internet, each of which is manually segmented using image matting as a preprocessing step. Second, we search the database using Face++ to find images with poses similar to a given source portrait and choose one satisfactory image from the results as the target. Third, we extract the portrait foregrounds from both the source and target images. The system then extracts semantic parts, such as the face, eyes, eyebrows, lips, and teeth, from the extracted source foreground using image matting algorithms. After that, we perform color transfer between corresponding parts that share the same semantic label. We obtain the final result by seamlessly compositing the different parts together using alpha blending. Experimental results show that our semantics-driven approach generates better color transfer results for portraits than previous methods and provides users with a new means to retouch their portraits.
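The abstract describes the pipeline only at a high level. As a rough illustration of the per-part step (color transfer inside a matted region followed by alpha blending), the sketch below matches Lab-channel means and standard deviations between corresponding masked regions, in the style of Reinhard et al. This is an assumed, minimal re-creation, not the authors' implementation; the function name, mask conventions, and OpenCV/NumPy usage are illustrative.

```python
# Minimal sketch (assumed, not the paper's code): recolor one semantic region of
# the source portrait using the statistics of the matching target region, then
# alpha-blend the recolored region back over the source image.
import cv2
import numpy as np

def transfer_region(source_bgr, target_bgr, source_alpha, target_alpha):
    """source_bgr/target_bgr: uint8 BGR images; *_alpha: float mattes in [0, 1]."""
    src_lab = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt_lab = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    src_px = src_lab[source_alpha > 0.5]   # pixels inside the source matte
    tgt_px = tgt_lab[target_alpha > 0.5]   # pixels inside the target matte

    # Shift and scale each Lab channel so the region statistics match the target.
    mean_s, std_s = src_px.mean(0), src_px.std(0) + 1e-6
    mean_t, std_t = tgt_px.mean(0), tgt_px.std(0) + 1e-6
    recolored = (src_lab - mean_s) / std_s * std_t + mean_t
    recolored = cv2.cvtColor(np.clip(recolored, 0, 255).astype(np.uint8),
                             cv2.COLOR_LAB2BGR)

    # Alpha-blend the recolored region over the original source portrait.
    a = source_alpha[..., None]
    return (a * recolored + (1.0 - a) * source_bgr).astype(np.uint8)
```

A full pipeline in this spirit would repeat the call for each semantic part (face, eyes, eyebrows, lips, teeth), each with its own matte, and composite the results into the final portrait.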

https://eprints.bournemouth.ac.uk/33100/

Sources: Scopus; Web of Science (Lite); Manual (preferred by Lihua You); BURO EPrints
