While visual imitation learning offers one of the most effective ways of learning from visual demonstrations, generalizing from them requires either hundreds of diverse demonstrations, task-specific priors, or large, hard-to-train parametric models. One reason such complexities arise is that standard visual imitation frameworks try to solve two coupled problems at once: learning a succinct but effective representation from diverse visual data, while simultaneously learning to associate the demonstrated actions with that representation. Such joint learning creates an interdependence between the two problems, which often results in needing large amounts of demonstration data. To address this challenge, we instead propose to decouple representation learning from behavior learning for visual imitation. First, we learn a visual representation encoder from offline data using standard supervised and self-supervised learning methods. Once the representations are trained, we use non-parametric Locally Weighted Regression to predict the actions. We experimentally show that this simple decoupling improves the performance of visual imitation models on both offline demonstration datasets and real-robot door opening compared to prior work in visual imitation. All of our generated data, code, and robot videos are publicly available at https://jyopari.github.io/VINN/.
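To make the decoupled pipeline concrete, below is a minimal sketch (not the authors' implementation) of the two stages: a frozen visual encoder maps images to embeddings, and actions are predicted by locally weighted regression, i.e., distance-weighted nearest neighbors over the demonstration embeddings. The ResNet stand-in, the choice of `k`, the inverse-distance kernel, and the dummy demonstration tensors are illustrative assumptions; in practice the encoder would be trained with self-supervision (e.g., BYOL) on the offline data as described above.

```python
import torch
import torchvision

# Stage 1: a frozen visual encoder. Here a plain ResNet-50 backbone stands in;
# in practice one would load self-supervised (e.g., BYOL) weights trained on
# the offline demonstration frames.
encoder = torchvision.models.resnet50(weights=None)
encoder.fc = torch.nn.Identity()  # keep the 2048-d pooled features
encoder.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) images to (N, D) embeddings."""
    return encoder(images)

@torch.no_grad()
def predict_action(query_image, demo_embeddings, demo_actions, k=5):
    """Stage 2: locally weighted regression over the k nearest demo embeddings.

    demo_embeddings: (M, D) embeddings of demonstration frames.
    demo_actions:    (M, A) actions recorded for those frames.
    """
    q = embed(query_image.unsqueeze(0))                  # (1, D)
    dists = torch.cdist(q, demo_embeddings).squeeze(0)   # (M,) Euclidean distances
    topk = torch.topk(dists, k, largest=False)
    # Inverse-distance weights; a softmax over negative distances also works.
    weights = 1.0 / (topk.values + 1e-8)
    weights = weights / weights.sum()
    return (weights.unsqueeze(1) * demo_actions[topk.indices]).sum(dim=0)

# Usage with dummy tensors standing in for the offline demonstrations:
demo_images = torch.randn(100, 3, 224, 224)
demo_actions = torch.randn(100, 7)                # e.g., 7-DoF end-effector deltas
demo_embeddings = embed(demo_images)
action = predict_action(torch.randn(3, 224, 224), demo_embeddings, demo_actions, k=5)
print(action.shape)                               # torch.Size([7])
```

Because the regression stage is non-parametric, adding a new demonstration only requires appending its embedded frames and actions to the buffer; no further training is needed.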