Struct2Hair: A hair shape descriptor for hairstyle modeling

Authors: Zhang, W., Nie, Y., Guo, S., Chang, J., Zhang, J. and Tong, R.

Journal: Computer Animation and Virtual Worlds

Volume: 34

Issue: 5

eISSN: 1546-427X

ISSN: 1546-4261

DOI: 10.1002/cav.2128

Abstract:

In recent years, it has become possible to extract hair information for hair reconstruction from multiple cameras or a single monocular camera. Using a single image as input avoids the expensive setups and complex calibration required by multi-view reconstruction. Taking advantage of an extendible hairstyle database, this paper introduces Struct2Hair, a novel single-view hair modeling approach built around a hair shape descriptor (HSD). The HSD is defined as a fundamental structure-aware feature: a combination of the critical shapes in a hairstyle. A complete dataset of critical hair shapes is constructed from a known database of three-dimensional (3D) hair models. We first analyze the input two-dimensional (2D) image to automatically extract the orientation information and a 2D hair sketch. The extracted information is then used to retrieve the corresponding critical shapes, with optimization, to build a robust HSD. Finally, the HSD is used to construct a weighted 3D hair orientation field that guides full-head hair model generation. Owing to the HSD, our method preserves the local geometric features of the hair while retaining the global shape of the hairstyle, which benefits further hair editing and stylization.
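The first step the abstract describes, extracting per-pixel orientation information from the 2D hair image, is commonly implemented in this literature with an oriented filter bank. The paper does not specify its filters, so the sketch below assumes even Gabor kernels; the function name and all parameters are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import ndimage

def orientation_map(gray, n_angles=32, ksize=17, sigma=2.0, lambd=4.0):
    """Estimate a per-pixel hair orientation map by maximum filter response.

    gray: 2D float array (grayscale hair region).
    Returns (theta, confidence): dominant angle in [0, pi) per pixel,
    and the filter response strength at that angle.
    """
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    responses = np.empty((n_angles,) + gray.shape)
    for i, theta in enumerate(angles):
        # Rotate kernel coordinates so the filter is tuned to direction theta.
        xr = xs * np.cos(theta) + ys * np.sin(theta)
        yr = -xs * np.sin(theta) + ys * np.cos(theta)
        # Even Gabor kernel: Gaussian envelope times a cosine carrier
        # oscillating across the strand direction.
        kernel = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * yr / lambd)
        kernel -= kernel.mean()  # zero-mean so flat regions give no response
        responses[i] = np.abs(ndimage.convolve(gray, kernel, mode="nearest"))
    best = responses.argmax(axis=0)  # index of the strongest-responding angle
    return angles[best], responses.max(axis=0)
```

The resulting dense orientation map is the kind of 2D input that, per the abstract, drives critical-shape retrieval and the weighted 3D orientation field.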

https://eprints.bournemouth.ac.uk/37986/

Source: Scopus

Source: Web of Science (Lite)

Source: BURO EPrints