Grasp User Interfaces (GPUIs) are well-suited for dual-tasking between virtual and physical tasks. These interfaces do not require users to release handheld objects, supporting microinteractions that happen in short bursts and cause minimal interruption to the physical task. Design approaches for these interfaces include user elicitation studies and expert-based strategies, which can be combined with computational techniques for quicker and more cost-effective iterations. Current computational tools for designing GPUIs rely on simulations based on kinematic, geometric, and biomechanical parameters. However, the relationship between these low-level factors and higher-level user preferences remains underexplored. In this study, we gathered user preferences using a two-alternative forced choice paradigm with single-finger reach tasks performed while holding objects representative of real-world activities with different grasp types. We present a quantitative analysis of how various low-level factors influence user preference in grasp interactions, identifying the most significant ones. Leveraging this analysis, we developed a predictive model to estimate user preference and integrated it into an existing simulation tool for GPUI design. In addition to deepening the understanding of design factors in grasp interactions, our predictive model provides a spatial utility metric based on user preferences, paving the way for adaptive GPUIs and mixed-initiative systems that support better dual-tasking between virtual and physical environments.
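To make the modeling approach concrete, the sketch below fits a Bradley-Terry-style logistic model to synthetic two-alternative forced choice data, recovering utility weights over low-level factors. The feature names, the logistic form, and all data here are illustrative assumptions, not the paper's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each 2AFC trial compares two candidate reach targets, A and B, described by
# hypothetical low-level factors (e.g., normalized reach distance, finger
# travel, joint strain). These factors are assumptions for illustration.
n_trials, n_features = 500, 3
xa = rng.random((n_trials, n_features))
xb = rng.random((n_trials, n_features))

true_w = np.array([-2.0, -1.0, 0.5])           # ground-truth utility weights
d = xa - xb                                     # per-trial feature differences
p_true = 1.0 / (1.0 + np.exp(-(d @ true_w)))    # P(user prefers A over B)
y = (rng.random(n_trials) < p_true).astype(float)  # simulated choices

# Fit weights by gradient ascent on the logistic log-likelihood.
w = np.zeros(n_features)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(d @ w)))
    w += 0.1 * d.T @ (y - p) / n_trials

def utility(x: np.ndarray) -> float:
    """Spatial utility score for a candidate target's feature vector."""
    return float(x @ w)
```

A model of this form yields a scalar `utility` for any candidate target location, which is the kind of preference-based spatial metric a simulation tool could rank placements by.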