Action snapshot with single pose and viewpoint

This data was imported from Scopus:

Authors: Wang, M., Guo, S., Liao, M., He, D., Chang, J. and Zhang, J.

http://eprints.bournemouth.ac.uk/30541/

Journal: Visual Computer

Volume: 35

Issue: 4

Pages: 507-520

ISSN: 0178-2789

eISSN: 1432-2315

DOI: 10.1007/s00371-018-1479-9

© 2018, Springer-Verlag GmbH Germany, part of Springer Nature.

Abstract: Many art forms present visual content as a single image captured from a particular viewpoint. Selecting a meaningful, representative moment from an action performance is difficult, even for an experienced artist, yet a well-picked image can tell a story effectively. This matters for a range of narrative scenarios, such as journalists reporting breaking news, scholars presenting their research, or artists crafting artworks. We address the underlying structures and mechanisms of a pictorial narrative with a new concept, called the action snapshot, which automates the process of generating a meaningful snapshot (a single still image) from an input of scene sequences. The input dynamic scenes may include several interacting, fully animated characters. We propose a novel method based on information theory to quantitatively evaluate the information contained in a pose. Taking the selected top postures as input, a convolutional neural network is constructed and trained with deep reinforcement learning to select a single viewpoint that maximally conveys the information of the sequence. User studies are conducted to experimentally compare the computer-selected poses and viewpoints with those selected by human participants. The results show that the proposed method can effectively assist the selection of the most informative snapshot from animation-intensive scenarios.
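The abstract does not specify how the information-theoretic pose score is computed. As a rough illustration only, the sketch below scores a pose by the Shannon entropy of its joint-angle histogram and keeps the top-k frames; the function names, the histogram binning, and the use of entropy over joint angles are all assumptions for illustration, not the paper's actual method.

```python
import math

def pose_entropy(joint_angles, bins=8):
    """Hypothetical pose score: Shannon entropy (in bits) of a histogram
    of the pose's joint angles over [-pi, pi). A pose whose joints spread
    across many bins scores higher than a uniform, collapsed pose."""
    lo, hi = -math.pi, math.pi
    counts = [0] * bins
    for a in joint_angles:
        # Clamp the bin index so a == pi falls in the last bin.
        idx = min(int((a - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(joint_angles)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def top_poses(sequence, k=3):
    """Rank frames (each a list of joint angles) by entropy, keep top k."""
    return sorted(sequence, key=pose_entropy, reverse=True)[:k]
```

Under this toy scoring, a frame with all joints at the same angle has zero entropy, while a frame with joints spread across the range scores higher and would be passed on to the viewpoint-selection stage.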

The data on this page was last updated at 04:57 on May 24, 2019.