In modern spatial computing devices, users are confronted with diverse methods for object selection, including eye gaze (cf. Apple Vision Pro), hand gestures (cf. Microsoft HoloLens 2), touch gestures (cf. Google Glass Enterprise Edition 2), and external controllers (cf. Magic Leap 2). Although a plethora of empirical studies examine which selection techniques perform best, a common limiting factor is their partly artificial setups, which typically exclude practical influences such as visual distraction.
In this paper, we present a user study comparing two hand-based and two gaze-based state-of-the-art selection methods on the HoloLens 2. We extended a traditional Fitts' law-inspired study design by incorporating a visual task that simulates changes in the user interface after a successful selection. Without the visual task, gaze-based techniques were on average faster than hand-based techniques. This performance gain was eliminated (for head gaze) or even reversed (for eye gaze) when the visual task was active. These findings underscore the value of continued practice-oriented research on targeting methods in virtual environments.
Free access to the definitive version of this paper in the ACM Digital Library: https://dl.acm.org/doi/10.1145/3641825.3687712?cid=99659576953
This work was supported by the German Federal Ministry of Education and Research (BMBF). The research for this paper was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2176 'Understanding Written Artefacts: Material, Interaction and Transmission in Manuscript Cultures', project no. 390893796. The research was conducted within the scope of the Centre for the Study of Manuscript Cultures (CSMC) at Universität Hamburg.