Semantic modeling of indoor scenes with support inference from a single photograph

Authors: Nie, Y., Chang, J., Chaudhry, E., Guo, S., Smart, A. and Zhang, J.J.

Journal: Computer Animation and Virtual Worlds

Volume: 29

Issue: 3-4

eISSN: 1546-427X

ISSN: 1546-4261

DOI: 10.1002/cav.1825

Abstract:

We present an automatic approach for the semantic modeling of indoor scenes based on a single photograph, instead of relying on depth sensors. Without using handcrafted features, we guide indoor scene modeling with feature maps extracted by fully convolutional networks. Three parallel fully convolutional networks are adopted to generate object instance masks, a depth map, and an edge map of the room layout. Based on these high-level features, support relationships between indoor objects can be efficiently inferred in a data-driven manner. Constrained by the support context, a global-to-local model matching strategy is followed to retrieve the whole indoor scene. We demonstrate that the proposed method can efficiently retrieve indoor objects, even when they are heavily occluded. This approach enables efficient semantic-based scene editing.
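The pipeline the abstract describes can be summarised as a small data-flow sketch. The code below is purely illustrative: the function names, working resolution, and placeholder outputs are assumptions, not the authors' implementation or trained networks; it only shows how three parallel FCN branches could feed support inference and global-to-local model retrieval.

```python
# Illustrative sketch (assumed names and shapes, not the published system):
# three parallel FCN branches -> instance masks, depth map, layout edge map,
# then data-driven support inference, then global-to-local model retrieval.
import numpy as np

H, W = 240, 320  # assumed working resolution of the feature maps

def run_fcn(photo: np.ndarray, head: str) -> np.ndarray:
    """Stand-in for one FCN branch; a real system would run a trained network."""
    if head == "instances":
        return np.zeros((H, W), dtype=np.int32)    # per-pixel object instance ids
    if head == "depth":
        return np.ones((H, W), dtype=np.float32)   # relative depth per pixel
    return np.zeros((H, W), dtype=np.float32)      # room-layout edge probability

def infer_support(instances: np.ndarray, depth: np.ndarray) -> dict:
    """Data-driven support inference: for each object, decide which object
    (or the room structure) supports it, e.g. from mask adjacency and depth."""
    support = {}
    for obj_id in np.unique(instances):
        if obj_id == 0:               # 0 = background / room structure
            continue
        support[int(obj_id)] = 0      # placeholder: supported by the room
    return support

def retrieve_models(layout_edges: np.ndarray, support: dict) -> dict:
    """Global-to-local matching: fix the room layout first, then retrieve a
    model for each object consistent with its support relationship."""
    scene = {"layout": layout_edges, "objects": {}}
    for obj_id, parent in support.items():
        scene["objects"][obj_id] = {"model": f"placeholder_model_{obj_id}",
                                    "supported_by": parent}
    return scene

photo = np.zeros((H, W, 3), dtype=np.uint8)        # single input photograph
instances = run_fcn(photo, "instances")
depth = run_fcn(photo, "depth")
layout_edges = run_fcn(photo, "edges")
scene = retrieve_models(layout_edges, infer_support(instances, depth))
```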

https://eprints.bournemouth.ac.uk/30856/

Sources: Scopus, Web of Science (Lite), and BURO EPrints
