Semantic portrait color transfer with internet images

This data was imported from Scopus:

Authors: Yang, Y., Zhao, H., You, L., Tu, R., Wu, X. and Jin, X.

http://eprints.bournemouth.ac.uk/33100/

Journal: Multimedia Tools and Applications

Volume: 76

Issue: 1

Pages: 523-541

eISSN: 1573-7721

ISSN: 1380-7501

DOI: 10.1007/s11042-015-3063-x

© 2015, Springer Science+Business Media New York.

We present a novel color transfer method for portraits by exploring their high-level semantic information. First, a database is set up consisting of portrait images downloaded from the Internet, each of which is manually segmented using image matting as a preprocessing step. Second, we search the database with Face++ to find images whose poses are similar to a given source portrait image, and choose one satisfactory result as the target. Third, we extract the portrait foregrounds from both the source and target images. The system then extracts semantic regions, such as faces, eyes, eyebrows, lips, and teeth, from the source foreground using image matting algorithms. After that, we perform color transfer between corresponding parts that share the same semantic label. Finally, we obtain the transferred result by seamlessly compositing the parts together using alpha blending. Experimental results show that our semantics-driven approach generates better color transfer results for portraits than previous methods and provides users with a new means to retouch their portraits.
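The abstract does not specify how the per-part color transfer or the final compositing is computed; a common choice for region-wise transfer is Reinhard-style mean/standard-deviation matching, and the final composite is standard alpha blending. The sketch below is a minimal illustration under those assumptions (the function names, the use of boolean region masks, and the simplified per-channel statistics transfer are all hypothetical, not the authors' implementation):

```python
import numpy as np

def transfer_region_stats(source, target, src_mask, tgt_mask):
    """Shift the masked source pixels toward the target region's
    per-channel mean and standard deviation (Reinhard-style matching).
    source/target: float arrays of shape (H, W, 3); masks: boolean (H, W),
    e.g. the matte for one semantic part such as the lips or eyes."""
    out = source.copy()
    for c in range(3):
        src = source[..., c][src_mask]
        tgt = target[..., c][tgt_mask]
        scale = tgt.std() / (src.std() + 1e-8)  # match spread, avoid /0
        out[..., c][src_mask] = (src - src.mean()) * scale + tgt.mean()
    return out

def alpha_composite(foreground, background, alpha):
    """Blend foreground over background with a matte alpha in [0, 1],
    as in the final compositing step; alpha has shape (H, W) or (H, W, 1)."""
    alpha = np.atleast_3d(alpha)
    return alpha * foreground + (1.0 - alpha) * background
```

In a full pipeline, each semantic part would be transferred with its own mask pair and the results composited part by part with their mattes.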

This data was also imported from Web of Science (Lite), with identical bibliographic details.

The data on this page was last updated at 05:10 on February 17, 2020.