Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation

Authors: Deng, S., Jiang, N., Chang, J., Guo, S. and Zhang, J.J.

Journal: International Journal of Human-Computer Studies

Volume: 105

Pages: 68-80

eISSN: 1095-9300

ISSN: 1071-5819

DOI: 10.1016/j.ijhcs.2017.04.002

Abstract:

Multimodal interactions provide users with more natural ways to manipulate virtual 3D objects than traditional input methods. An emerging approach is gaze-modulated pointing, which lets users select and manipulate objects in a virtual space by combining gaze with other interaction techniques (e.g., mid-air gestures). Because gaze-modulated pointing uses different sensors to track and detect user behaviours, its performance relies on the user's perception of the exact spatial mapping between the virtual space and the physical space. An underexplored issue is that when the actual spatial mapping differs from the user's perception of it, manipulation errors (e.g., out-of-boundary errors, proximity errors) may occur. In gaze-modulated pointing, gaze can introduce such misalignment of the spatial mapping, leading the user to misperceive the virtual environment and consequently to make manipulation errors. This paper gives a clear definition of the problem through a thorough investigation of its causes, specifies the conditions under which it occurs, and validates them experimentally. It also proposes three methods (Scaling, Magnet and Dual-gaze) to address the problem and examines them in a comparative study involving 20 participants and 1,040 runs. The results show that all three methods improved manipulation performance with respect to the defined problem, with Magnet and Dual-gaze outperforming Scaling. This finding can inform more robust multimodal interface designs supported by both eye tracking and mid-air gesture control, without losing efficiency or stability.
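
The following is a minimal, purely illustrative Python sketch of the core idea in the abstract: a gaze-induced offset between the perceived and actual spatial mapping can turn an accurate hand movement into a near-miss (a proximity error), and a "Magnet"-style snap can absorb it. The abstract names the Scaling, Magnet and Dual-gaze methods without implementation detail, so every name, value and rule below is an assumption for illustration, not the authors' method.

from dataclasses import dataclass
import math

@dataclass
class Target:
    x: float
    y: float
    radius: float  # selection succeeds only within this distance

def map_hand_to_virtual(hand_xy, gaze_offset_xy):
    # Map a tracked hand position into virtual space. gaze_offset_xy models
    # the misalignment that gaze modulation can introduce between the mapping
    # the user perceives and the one the sensors actually apply (assumed here
    # to be a simple translation).
    return (hand_xy[0] + gaze_offset_xy[0], hand_xy[1] + gaze_offset_xy[1])

def hits(cursor_xy, target):
    # A selection counts only if the cursor lands inside the target.
    return math.hypot(cursor_xy[0] - target.x, cursor_xy[1] - target.y) <= target.radius

def magnet_snap(cursor_xy, targets, snap_distance):
    # Hypothetical "Magnet"-style correction: snap the cursor to the nearest
    # target centre within snap_distance, so that small mapping errors still
    # select the intended object.
    best, best_d = None, snap_distance
    for t in targets:
        d = math.hypot(cursor_xy[0] - t.x, cursor_xy[1] - t.y)
        if d <= best_d:
            best, best_d = t, d
    return (best.x, best.y) if best is not None else cursor_xy

# The user aims exactly at a target at (10, 10), but a 1.5-unit gaze-induced
# offset pushes the raw cursor just outside the 1.0-unit radius: a proximity
# error. Snapping recovers the intended selection.
target = Target(x=10.0, y=10.0, radius=1.0)
raw = map_hand_to_virtual(hand_xy=(10.0, 10.0), gaze_offset_xy=(1.5, 0.0))
print(hits(raw, target))        # False: the raw cursor misses the target
corrected = magnet_snap(raw, [target], snap_distance=2.0)
print(hits(corrected, target))  # True: the snap absorbs the offset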

https://eprints.bournemouth.ac.uk/29272/

Sources: Scopus, Web of Science (Lite), Manual and BURO EPrints

Preferred by: Nan Jiang
