A powerful paradigm for sensorimotor control is to predict actions directly from observations. Training such an end-to-end system allows representations useful for the downstream task to emerge automatically. In visual navigation, an agent can learn to navigate without any manual design by correlating how its views change with the actions being taken. However, the lack of inductive bias makes this approach data-inefficient and impractical in scenarios like search and rescue, where interacting with the environment to collect data is costly. We hypothesize that a sufficient representation of the current view and the goal view for a navigation policy can be learned by predicting the location and size of a crop of the current view that corresponds to the goal. We further show that such random crop prediction, trained in a self-supervised fashion purely on random noise images, transfers well to natural home images. The learned representation can then be bootstrapped to learn a navigation policy efficiently with little interaction data. Code is available at https://github.com/yanweiw/noise2ptz.
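To make the pretext task concrete, below is a minimal sketch of how self-supervised crop-prediction pairs could be generated from random noise images. The function name, image size, and crop-size range are illustrative assumptions, not the repository's actual implementation; the point is only that the "current view" is a noise image, the "goal view" is a random crop of it, and the regression target is the crop's normalized location and size.

```python
import numpy as np

def make_noise_crop_pair(size=128, crop_min=32, crop_max=96, rng=None):
    """Sample a random-noise 'current view' and a crop of it as the 'goal view'.

    Returns (current, goal, label) where label = (cx, cy, scale) gives the
    crop's normalized center location and relative size -- the quantities a
    crop-prediction network would regress. Hypothetical helper for illustration.
    """
    rng = rng or np.random.default_rng()
    # Random noise image standing in for the current view.
    current = rng.integers(0, 256, size=(size, size, 3), dtype=np.uint8)

    # Sample a square crop: side length and top-left corner.
    s = int(rng.integers(crop_min, crop_max + 1))
    x0 = int(rng.integers(0, size - s + 1))
    y0 = int(rng.integers(0, size - s + 1))
    goal = current[y0:y0 + s, x0:x0 + s]

    # Self-supervised labels: normalized crop center and relative size.
    cx = (x0 + s / 2) / size
    cy = (y0 + s / 2) / size
    scale = s / size
    return current, goal, np.array([cx, cy, scale], dtype=np.float32)
```

Because the labels come for free from the sampling procedure, no environment interaction or human annotation is needed to pretrain the representation; only the downstream navigation policy requires interaction data.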