For the best human-robot interaction experience, the robot's navigation policy should take the user's personal preferences into account. In this paper, we present a learning framework, complemented by a perception pipeline, to train a depth vision-based, personalized navigation controller from user demonstrations. Our refined virtual reality interface enables the demonstration of robot navigation trajectories while the user is in motion, covering dynamic interaction scenarios. In a detailed analysis, we evaluate different configurations of the perception pipeline. As the experiments demonstrate, our new pipeline compresses the perceived depth images into a latent state representation and thus enables the learning to reason efficiently about the robot's dynamic environment. Employing a variational autoencoder in combination with a motion predictor, we discuss the robot's navigation performance in various virtual scenes and demonstrate the first personalized robot navigation controller that relies solely on depth images.
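To make the described perception pipeline concrete, the following PyTorch sketch shows one plausible arrangement of its two components: a variational autoencoder that compresses depth images into a latent state, and a motion predictor that forecasts how that latent state evolves in a dynamic scene. This is an assumption-laden illustration rather than the authors' implementation; the class names DepthVAE and MotionPredictor, the 64x64 input resolution, the 32-dimensional latent space, and the GRU-based predictor are all hypothetical choices.

```python
# Hedged sketch (not the paper's code): one way a depth-image VAE could be
# paired with a motion predictor over latent states, as described above.
import torch
import torch.nn as nn


class DepthVAE(nn.Module):
    """Compresses a single-channel depth image into a low-dimensional latent state."""

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def encode(self, depth: torch.Tensor) -> torch.Tensor:
        h = self.encoder(depth)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)  # reparameterization trick

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(depth))


class MotionPredictor(nn.Module):
    """Predicts the next latent state from a history of latent states,
    giving the controller a notion of how the dynamic scene evolves."""

    def __init__(self, latent_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, latent_seq: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(latent_seq)   # latent_seq: (B, T, latent_dim)
        return self.head(h[-1])       # predicted next latent state


# Minimal usage: encode a short sequence of depth frames, then predict
# the next latent state that a navigation policy could consume.
vae, predictor = DepthVAE(), MotionPredictor()
depth_seq = torch.rand(2, 5, 1, 64, 64)  # (batch, time, channel, H, W)
latents = torch.stack([vae.encode(depth_seq[:, t]) for t in range(5)], dim=1)
next_latent = predictor(latents)         # shape (2, 32)
```

Under these assumptions, the navigation policy would act on the compact latent state and its predicted evolution rather than on raw depth images, which is what the abstract refers to as efficient reasoning about the dynamic environment.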