Humans learn about objects through interaction, using multiple senses such as vision, sound, and touch. While vision provides information about an object's appearance, non-visual sensors, such as audio and haptics, can reveal its intrinsic properties, such as weight, temperature, hardness, and the sound it makes. Using tools to interact with objects can expose additional object properties that are otherwise hidden (e.g., knives and spoons can be used to examine the properties of food, including its texture and consistency). Robots can likewise use tools to interact with objects and gather information about their implicit properties via non-visual sensors. However, a robot's model for recognizing objects using one tool-mediated behavior does not generalize to a new tool or behavior, because the observed data distributions differ. To address this challenge, we propose a framework that enables robots to transfer implicit knowledge about granular objects across different tools and behaviors. The proposed approach learns a shared latent space from multiple robot contexts, each produced by the respective sensory data collected while interacting with objects using tools. We collected a dataset with a UR5 robot that performed 5,400 interactions using 6 tools and 6 behaviors on 15 granular objects, and tested our method on cross-tool and cross-behavior transfer tasks. Our results show that a less-experienced target robot can benefit from the experience gained by the source robot and perform recognition on a set of novel objects. We have released the code, datasets, and additional results: https://github.com/gtatiya/Tool-Knowledge-Transfer.
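To make the cross-tool transfer idea concrete, the following is a minimal sketch, not the paper's actual method: it assumes synthetic feature vectors for the same set of objects observed in a "source tool" context and a "target tool" context, learns a simple linear map between the two contexts via least squares (a stand-in for the learned shared latent space), and recognizes an object from a source-context observation by nearest neighbor in the target context. All names, dimensions, and the linear-map choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 15 objects, each with a hidden identity vector observed
# through two different tool-dependent "observation models" plus sensor noise.
n_objects, d_src, d_tgt = 15, 32, 24
latent = rng.normal(size=(n_objects, 8))        # shared object identity
A = rng.normal(size=(8, d_src))                 # source-tool observation model
B = rng.normal(size=(8, d_tgt))                 # target-tool observation model
X_src = latent @ A + 0.05 * rng.normal(size=(n_objects, d_src))
X_tgt = latent @ B + 0.05 * rng.normal(size=(n_objects, d_tgt))

# Learn a linear map from the source context to the target context using
# objects seen by both tools (an illustrative stand-in for latent alignment).
W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)

# Project a fresh source-context observation of object 3 into the target
# context and recognize it by nearest neighbor among target-tool exemplars.
query = (latent[3] @ A) @ W
pred = int(np.argmin(np.linalg.norm(X_tgt - query, axis=1)))
print(pred)
```

In the paper's setting the alignment would be learned from rich tool-mediated sensory data rather than synthetic linear observations, but the recognition-by-projection structure is the same.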