Learning low-dimensional latent state-space dynamics models has been a powerful paradigm for enabling vision-based planning and learning for control. We introduce a latent dynamics learning framework that is uniquely designed to induce proportional controllability in the latent space, thus enabling the use of much simpler controllers than in prior work. We show that our learned dynamics model enables proportional control from pixels, dramatically simplifies and accelerates behavioural cloning of vision-based controllers, and provides interpretable goal discovery when applied to imitation learning of switching controllers from demonstrations.
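To make the core idea concrete, the following is a minimal sketch of what proportional control in a learned latent space might look like: an encoder maps image observations to low-dimensional latent states, and the control input is simply a gain times the latent error between the current and goal observations. The encoder architecture and all names here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """Hypothetical encoder mapping an image observation to a low-dimensional latent state."""

    def __init__(self, latent_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)


def proportional_control(encoder: ConvEncoder,
                         image: torch.Tensor,
                         goal_image: torch.Tensor,
                         gain: float = 1.0) -> torch.Tensor:
    """Proportional control acting directly on latent coordinates:
    u = K_p * (z_goal - z), where z and z_goal are the encoded
    current and goal observations."""
    with torch.no_grad():
        z = encoder(image)
        z_goal = encoder(goal_image)
    return gain * (z_goal - z)
```

The simplicity of the controller is the point: once the latent space is proportionally controllable, no planner or learned policy network is needed on top of the encoder, only a latent error and a gain.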