Tactile sensing is critical for robotic grasping and manipulation of objects under visual occlusion. However, in contrast to simulations of robot arms and cameras, current simulations of tactile sensors have limited accuracy, speed, and utility. In this work, we develop an efficient 3D finite element method (FEM) model of the SynTouch BioTac sensor using an open-access, GPU-based robotics simulator. Our simulations closely reproduce results from an experimentally-validated model in an industry-standard, CPU-based simulator, but at 75x the speed. We then learn latent representations for simulated BioTac deformations and real-world electrical output through self-supervision, as well as projections between the latent spaces using a small supervised dataset. Using these learned latent projections, we accurately synthesize real-world BioTac electrical output and estimate contact patches, both for unseen contact interactions. This work contributes an efficient, freely-accessible FEM model of the BioTac and comprises one of the first efforts to combine self-supervision, cross-modal transfer, and sim-to-real transfer for tactile sensors.
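To make the cross-modal idea concrete, below is a minimal sketch of projecting between two learned latent spaces using a small paired dataset. All names and dimensions are hypothetical, and a simple least-squares linear map stands in for the learned projection; the paper's actual encoders and projection networks are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes: z_sim from an encoder of simulated FEM
# deformations, z_real from an encoder of real electrode signals, paired
# on a small supervised dataset. Stand-in data tied by a linear map.
n_pairs, d_sim, d_real = 64, 8, 8
z_sim = rng.normal(size=(n_pairs, d_sim))
true_map = rng.normal(size=(d_sim, d_real))
z_real = z_sim @ true_map + 0.01 * rng.normal(size=(n_pairs, d_real))

# Fit the sim-latent -> real-latent projection by least squares,
# analogous to learning a mapping between latent spaces from few labels.
W, *_ = np.linalg.lstsq(z_sim, z_real, rcond=None)

# Project an unseen simulated latent into the real-sensor latent space,
# from which real electrical output could then be decoded.
z_new = rng.normal(size=(1, d_sim))
z_pred = z_new @ W
```

The same recipe runs in reverse (real latent to simulated latent) to estimate contact patches from electrode readings.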