Gaze modulated disambiguation technique for gesture control in 3D virtual objects selection

Authors: Deng, S., Chang, J., Hu, S.M. and Zhang, J.J.

Journal: 2017 3rd IEEE International Conference on Cybernetics, CYBCONF 2017 - Proceedings

DOI: 10.1109/CYBConf.2017.7985779

Abstract:

Inputs with multimodal information provide more natural ways to interact with a virtual 3D environment. An emerging technique that integrates gaze modulated pointing with mid-air gesture control enables fast target acquisition and rich control expressions. The performance of this technique relies on eye tracking accuracy, which is not yet comparable with that of traditional pointing techniques (e.g., the mouse). This causes problems when fine-grained interactions are required, such as selection in a dense virtual scene where proximity and occlusion are prone to occur. This paper proposes a coarse-to-fine solution that compensates for the degradation introduced by eye tracking inaccuracy: a gaze cone detects ambiguity, and a gaze probe then declutters the candidates. The technique is tested in a comparative experiment involving 12 participants and 3,240 runs. The results show that it enhanced selection accuracy and user experience, although its efficiency can still be improved. This study contributes a robust multimodal interface design supported by both eye tracking and mid-air gesture control.
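The abstract describes a gaze cone used to detect ambiguity among nearby targets before a gaze probe refines the selection. The paper's implementation details are not reproduced in this record; the sketch below is only an illustration of the cone test (the function names, the 5-degree half-angle, and the point-object scene representation are assumptions, not the authors' code). An object counts as a candidate when the angle between the gaze direction and the eye-to-object vector falls within the cone's half-angle; more than one candidate signals ambiguity.

```python
import math

def in_gaze_cone(eye, gaze_dir, obj_pos, half_angle_deg):
    """Return True if obj_pos lies inside the gaze cone.

    eye, gaze_dir, obj_pos are 3D tuples; half_angle_deg is the
    cone's half-angle in degrees (an assumed parameter).
    """
    # Vector from the eye to the object.
    v = [o - e for o, e in zip(obj_pos, eye)]
    v_len = math.sqrt(sum(c * c for c in v))
    if v_len == 0.0:
        return True  # object coincides with the eye position
    g_len = math.sqrt(sum(c * c for c in gaze_dir))
    # Angle between gaze direction and eye-to-object vector.
    cos_a = sum(a * b for a, b in zip(gaze_dir, v)) / (g_len * v_len)
    cos_a = max(-1.0, min(1.0, cos_a))  # guard against rounding error
    return math.degrees(math.acos(cos_a)) <= half_angle_deg

def detect_ambiguity(eye, gaze_dir, objects, half_angle_deg=5.0):
    """Collect all objects inside the gaze cone.

    objects maps names to 3D positions. A result with more than one
    entry indicates an ambiguous (cluttered) selection that would be
    handed to the fine-grained disambiguation stage.
    """
    return [name for name, pos in objects.items()
            if in_gaze_cone(eye, gaze_dir, pos, half_angle_deg)]
```

For example, with the eye at the origin gazing along +z, an object 1 unit off-axis at depth 5 lies about 11 degrees off the gaze ray and falls outside a 5-degree cone, while objects closer to the ray are collected as ambiguous candidates.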

https://eprints.bournemouth.ac.uk/29651/

Source: Scopus

Gaze Modulated Disambiguation Technique for Gesture Control in 3D Virtual Objects Selection

Authors: Deng, S., Chang, J., Zhang, J.J. and Hu, S.-M.

Journal: 2017 3RD IEEE INTERNATIONAL CONFERENCE ON CYBERNETICS (CYBCONF)

Pages: 208-215

ISSN: 2475-6113

https://eprints.bournemouth.ac.uk/29651/

Source: Web of Science (Lite)

Gaze modulated disambiguation technique for gesture control in 3D virtual objects selection

Authors: Deng, S., Chang, J., Hu, S.M. and Zhang, J.J.

Conference: 3rd IEEE International Conference on Cybernetics (CYBCONF)

ISBN: 9781538622018

https://eprints.bournemouth.ac.uk/29651/

Source: BURO EPrints