As augmented reality technology and hardware become more mature and affordable, researchers have been exploring more intuitive and discoverable interaction techniques for immersive environments. In this paper, we investigate multimodal interaction for 3D object manipulation in a multi-object virtual environment. To identify user-defined gestures, we conducted an elicitation study with 24 participants and 22 referents using an augmented reality headset. The study yielded 528 proposals, and after binning and ranking all gesture proposals, we derived a winning set of 25 gestures. We found that for the same task, participants preferred the same gesture for one-object and two-object manipulation, although both hands were used in the two-object scenario. We present the gesture and speech results, as well as the differences compared to similar studies conducted in single-object virtual environments. The study also explored the association between speech expressions and gesture strokes during object manipulation, which could improve recognizer efficiency in augmented reality headsets.