Toward emotional recognition during HCI using marker-based automated video tracking


Authors: Söderström, U., Li, S., Claxton, H.L., Holmes, D.C., Ranji, T.T., Santos, C.P., Westling, C.E.I. and Witchel, H.J.

Journal: ECCE 2019 - Proceedings of the 31st European Conference on Cognitive Ergonomics: "Design for Cognition"

Pages: 49-52

ISBN: 9781450371667

DOI: 10.1145/3335082.3335103

© 2019 Association for Computing Machinery. Postural movement of a seated person, as determined by lateral-aspect video analysis, can be used to estimate learning-relevant emotions. In this article the motion of a person interacting with a computer is automatically extracted from video by detecting the positions of motion-tracking markers on the person's body. Detection proceeds in two stages: candidate areas for the markers are proposed by a convolutional neural network, and the correct candidate areas are then identified by template matching. Several markers are detected in more than 99% of the video frames, while one is detected in only ≈ 80.2% of the frames. Template matching identifies the correct template in ≈ 80% of the frames, which indicates that template matching almost always succeeds when the correct candidates have been extracted. Suggestions for improving this performance are given, along with possible uses of the marker positions for estimating sagittal-plane motion.
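
As a concrete illustration of the second stage, the sketch below scores CNN-proposed candidate regions against a marker template using normalised cross-correlation and keeps the best match. This is a minimal sketch, not the authors' implementation: the CNN stage is assumed to already supply candidate boxes, and the function name, threshold, and OpenCV-based matching are illustrative assumptions rather than details taken from the paper.

import cv2
import numpy as np

def select_marker_candidate(frame_gray, candidate_boxes, template_gray,
                            score_threshold=0.6):
    """Pick the candidate region that best matches a marker template.

    frame_gray      -- full video frame, 2-D uint8 array
    candidate_boxes -- (x, y, w, h) boxes proposed by the CNN stage
    template_gray   -- reference image of a motion-tracking marker
    Returns the best box, or None if no candidate scores above threshold.
    """
    th, tw = template_gray.shape
    best_box, best_score = None, -1.0
    for (x, y, w, h) in candidate_boxes:
        patch = frame_gray[y:y + h, x:x + w]
        if patch.size == 0:
            continue
        # Resize the crop to the template size so matchTemplate
        # returns a single normalised cross-correlation score.
        patch = cv2.resize(patch, (tw, th))
        score = cv2.matchTemplate(patch, template_gray,
                                  cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_box, best_score = (x, y, w, h), score
    return best_box if best_score >= score_threshold else None

# Hypothetical usage on a synthetic frame with one candidate box:
frame = np.zeros((480, 640), dtype=np.uint8)
template = np.zeros((20, 20), dtype=np.uint8)
cv2.circle(template, (10, 10), 6, 255, -1)   # marker template: white disc
cv2.circle(frame, (210, 110), 9, 255, -1)    # the "marker" in the frame
print(select_marker_candidate(frame, [(195, 95, 30, 30)], template))

Normalised cross-correlation (TM_CCOEFF_NORMED) is used here because it yields scores in [-1, 1] that are comparable across candidates of differing brightness; the paper does not specify which matching score its template-matching stage uses.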
