Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View

Authors: Liu, C., Chen, W., Ward, J. and Takahashi, N.

http://eprints.bournemouth.ac.uk/24488/

Journal: Scientific Reports

Publisher: Nature Publishing Group

Volume: 6

Pages: 31001

ISSN: 2045-2322

DOI: 10.1038/srep31001

Prior research based on static images has found limited improvement in recognising previously learnt faces in a new expression, even after several different facial expressions of those faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces presented either in short video clips or in still images. To assess the effect of exposure to expression variation, each face was learnt through either a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression after exposure to a single expression, whereas faces learnt from stills showed poorer generalisation after exposure to either one or three expressions. However, although recognition performance was superior for faces learnt through video clips, dynamic facial expression did not produce better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposure to moving expressions for expression-invariant face recognition.


The data on this page was last updated at 04:51 on July 17, 2018.