Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement across a surface. Adding realistic haptic textures to VR environments requires a model that generalizes both to variations in a user's interaction and to the wide variety of textures in the world. Methods for haptic texture rendering exist, but they usually develop one model per texture, which scales poorly. We present a deep learning-based, action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. This model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface response, conditioned on the user's action, in real time. To render the texture, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. Our user study shows that this learning-based method creates high-frequency texture renderings of comparable or better quality than state-of-the-art methods, without the need to learn a separate model per texture. Furthermore, we show that the method can render previously unseen textures from a single GelSight image of their surface.
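To make the action-conditional idea concrete, the sketch below shows one plausible way such a unified model could be structured: a CNN encodes a single GelSight image into a texture embedding, which is concatenated with the user's action (e.g., scan speed and normal force) to predict a short window of high-frequency vibration samples for the transducer. This is a minimal illustrative sketch, not the paper's actual architecture; the layer sizes, the class name ActionConditionalTextureModel, the 2-D action vector, the 128x128 input resolution, and the 100-sample output window are all assumptions made for this example.

```python
# Hypothetical sketch of an action-conditional texture model (PyTorch).
# Architecture details here are assumptions, not the authors' design.
import torch
import torch.nn as nn

class ActionConditionalTextureModel(nn.Module):
    def __init__(self, action_dim: int = 2, out_samples: int = 100):
        super().__init__()
        # CNN encoder for a single GelSight image of the surface
        # (assumed 3x128x128), producing a 64-d texture embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head conditioned on the user's action (assumed here to be
        # scan speed and normal force) that predicts the next window of
        # high-frequency vibration samples for the vibrotactile transducer.
        self.head = nn.Sequential(
            nn.Linear(64 + action_dim, 128), nn.ReLU(),
            nn.Linear(128, out_samples),
        )

    def forward(self, gelsight_image: torch.Tensor,
                action: torch.Tensor) -> torch.Tensor:
        z = self.encoder(gelsight_image)              # texture embedding
        return self.head(torch.cat([z, action], 1))   # vibration window

# Usage with random stand-in data (values and units are illustrative only).
model = ActionConditionalTextureModel()
img = torch.randn(1, 3, 128, 128)   # one GelSight image of an unseen texture
act = torch.tensor([[0.05, 1.2]])   # [speed (m/s), normal force (N)] -- assumed
vibration = model(img, act)          # (1, 100) window sent to the transducer
```

Because the texture appears only as an image embedding rather than a per-texture set of weights, a design of this kind is what allows one model to cover all materials and to generalize to an unseen texture from a single GelSight image, as the abstract describes.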