Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data. Could we leverage data previously collected on another robot to reduce or even completely remove this need for robot-specific data? We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge" such as proprioception, kinematics, and camera calibration to achieve this. First, we learn modular dynamics models that pair a transferable, robot-agnostic world dynamics module with a robot-specific, analytical robot dynamics module. Next, we set up visual planning costs that draw a distinction between the robot self and the world. Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers, even permitting zero-shot transfer onto new robots for the very first time. Project website: https://hueds.github.io/rac/
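To make the modular structure concrete, below is a minimal PyTorch sketch. It is not the authors' implementation; the module names, dimensions, feature-space world state, and the trivial integrator standing in for the analytical robot dynamics are all illustrative assumptions. The idea it shows: a learned, robot-agnostic world dynamics module conditioned on the output of a per-robot analytical dynamics module, plus a robot-aware planning cost that scores only non-robot pixels using a mask that could be rendered from kinematics and camera calibration.

```python
import torch
import torch.nn as nn

class WorldDynamics(nn.Module):
    """Transferable, robot-agnostic module: predicts the next world state
    (here an abstract feature vector, e.g. encoded robot-masked pixels)
    from the current world state and the robot's next pose."""
    def __init__(self, world_dim: int = 128, robot_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(world_dim + robot_dim, 256),
            nn.ReLU(),
            nn.Linear(256, world_dim),
        )

    def forward(self, world_state, next_robot_state):
        return self.net(torch.cat([world_state, next_robot_state], dim=-1))


def analytical_robot_dynamics(robot_state, action):
    """Robot-specific module. A trivial kinematic integrator stands in for
    the robot's known analytical dynamics (from proprioception/kinematics);
    transferring to a new robot only requires swapping this function."""
    return robot_state + action


class ModularDynamics(nn.Module):
    """Pairs the shared world module with a per-robot analytical module."""
    def __init__(self, world_dim: int = 128, robot_dim: int = 7):
        super().__init__()
        self.world = WorldDynamics(world_dim, robot_dim)

    def forward(self, world_state, robot_state, action):
        next_robot = analytical_robot_dynamics(robot_state, action)
        next_world = self.world(world_state, next_robot)
        return next_world, next_robot


def robot_aware_cost(pred_img, goal_img, robot_mask):
    """Visual planning cost that separates robot self from world: only
    non-robot pixels are scored, so a new robot's unfamiliar appearance
    cannot dominate the cost. `robot_mask` is a boolean image marking
    robot pixels, e.g. rendered from kinematics and camera calibration."""
    world = (~robot_mask).float()
    return ((pred_img - goal_img) ** 2 * world).sum() / world.sum().clamp(min=1)


if __name__ == "__main__":
    model = ModularDynamics()
    w = torch.randn(2, 128)            # batch of world-state features
    r = torch.zeros(2, 7)              # proprioceptive robot states
    a = torch.randn(2, 7)              # candidate actions
    next_w, next_r = model(w, r, a)

    pred = torch.rand(2, 1, 64, 64)    # predicted and goal images
    goal = torch.rand(2, 1, 64, 64)
    mask = torch.zeros(2, 1, 64, 64, dtype=torch.bool)
    print(robot_aware_cost(pred, goal, mask).item())
```

In a visual planner, candidate action sequences would be rolled out through `ModularDynamics` and ranked by `robot_aware_cost` against a goal image; only the world module's weights need to transfer across robots.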