To assist robots in teleoperation tasks, haptic rendering, which allows human operators to access a virtual sense of touch, has been developed in recent years. Most previous haptic rendering methods rely heavily on data collected by tactile sensors. However, tactile data are not widely available for robots due to their limited reachable space and the restrictions of tactile sensors. To eliminate the need for tactile data, in this paper we propose a novel method, named Vis2Hap, to generate haptic rendering from visual inputs that can be obtained from a distance without physical interaction. We take the surface texture of objects as the key cue to be conveyed to the human operator. To this end, a generative model is designed to simulate the roughness and slipperiness of the object's surface. To embed haptic cues in Vis2Hap, we use height maps from tactile sensors and spectrograms derived from friction coefficients as the intermediate outputs of the generative model. Once Vis2Hap is trained, it can be used to generate height maps and spectrograms for new surface textures, from which a friction image can be obtained and displayed on a haptic display. A user study demonstrates that our proposed Vis2Hap method enables users to experience a realistic haptic feeling similar to that of touching physical objects. The proposed vision-based haptic rendering has the potential to enhance human operators' perception of the remote environment and to facilitate robotic manipulation.
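For readers who prefer a concrete picture, the sketch below illustrates the high-level structure implied by the abstract: a shared visual encoder feeding two decoder heads, one producing a tactile height map (roughness cue) and one producing a friction spectrogram (slipperiness cue). This is a minimal sketch under our own assumptions, not the authors' implementation; the class name `Vis2HapSketch`, all layer sizes, and the two-head decoder design are illustrative.

```python
# Minimal sketch (NOT the authors' code) of the Vis2Hap idea: map a visual
# image of a surface texture to two intermediate outputs, a height map and a
# friction spectrogram. Layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class Vis2HapSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder over the visual input (3-channel RGB texture patch).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: decode a single-channel height map (surface roughness cue).
        self.height_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Head 2: decode a single-channel spectrogram derived from friction
        # coefficients (slipperiness cue).
        self.spec_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        z = self.encoder(rgb)
        return self.height_head(z), self.spec_head(z)

# Usage: one 128x128 RGB texture patch in; height map and spectrogram out.
model = Vis2HapSketch()
height_map, spectrogram = model(torch.randn(1, 3, 128, 128))
print(height_map.shape, spectrogram.shape)  # both torch.Size([1, 1, 128, 128])
```

In the paper's pipeline, these two intermediate outputs would then be combined into a friction image for the haptic display; that final rendering step is hardware-specific and is not sketched here.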