Gaze Modulated Disambiguation Technique for Gesture Control in 3D Virtual Objects Selection

This data was imported from Scopus:

Authors: Deng, S., Chang, J., Hu, S.M. and Zhang, J.J.

Journal: 2017 3rd IEEE International Conference on Cybernetics, CYBCONF 2017 - Proceedings

ISBN: 9781538622018

DOI: 10.1109/CYBConf.2017.7985779

© 2017 IEEE. Multimodal input provides more natural ways to interact with virtual 3D environments. An emerging technique that integrates gaze-modulated pointing with mid-air gesture control enables fast target acquisition and rich control expressions. The performance of this technique relies on eye-tracking accuracy, which is not yet comparable with that of traditional pointing techniques (e.g., the mouse). This causes problems when fine-grained interaction is required, such as selection in a dense virtual scene where proximity and occlusion are common. This paper proposes a coarse-to-fine solution that compensates for the degradation introduced by eye-tracking inaccuracy: a gaze cone first detects ambiguity, and a gaze probe then declutters the candidates. The technique was tested in a comparative experiment involving 12 participants and 3240 runs. The results show that it enhanced selection accuracy and user experience, although its efficiency can still be improved. This study contributes a robust multimodal interface design supported by both eye tracking and mid-air gesture control.
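The coarse step described in the abstract — a gaze cone that collects candidate objects around the estimated gaze ray, flagging ambiguity when more than one falls inside — can be sketched as follows. This is a minimal illustration only: the function names, the cone half-angle, and the point-based object test are assumptions, not the authors' implementation.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _angle_between(u, v):
    # Angle (radians) between two 3D vectors, clamped for safety.
    u, v = _normalize(u), _normalize(v)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

def gaze_cone_candidates(eye, gaze_dir, objects, half_angle_deg=5.0):
    """Return the names of objects whose centers lie inside the gaze cone.

    eye: 3D eye position; gaze_dir: estimated gaze direction;
    objects: dict mapping name -> 3D center position.
    The cone half-angle (hypothetical value here) absorbs eye-tracking
    inaccuracy; multiple hits signal an ambiguous selection that the
    fine "gaze probe" stage would then declutter.
    """
    limit = math.radians(half_angle_deg)
    hits = []
    for name, center in objects.items():
        to_obj = tuple(c - e for c, e in zip(center, eye))
        if _angle_between(gaze_dir, to_obj) <= limit:
            hits.append(name)
    return hits

# Two nearby objects fall inside the cone, so the selection is ambiguous.
scene = {"cube": (0.0, 0.0, 5.0), "sphere": (0.2, 0.1, 5.0), "cone": (3.0, 0.0, 5.0)}
hits = gaze_cone_candidates((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
ambiguous = len(hits) > 1
```

Under these assumptions, the "cube" and "sphere" centers sit within the 5° cone while "cone" does not, so the coarse stage would report an ambiguity and hand the two candidates to the fine decluttering stage.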


Pages: 208-215

ISSN: 2475-6113
