Collision-free motion generation in unknown environments is a core building block for robot manipulation. Generating such motions is challenging due to multiple objectives; not only should the solutions be optimal, but the motion generator itself must also be fast enough for real-time performance and reliable enough for practical deployment. A wide variety of methods have been proposed, ranging from local controllers to global planners, which are often combined to offset their shortcomings. We present an end-to-end neural model called Motion Policy Networks (M$\pi$Nets) to generate collision-free, smooth motion from just a single depth camera observation. M$\pi$Nets are trained on over 3 million motion planning problems in over 500,000 environments. Our experiments show that M$\pi$Nets are significantly faster than global planners while exhibiting the reactivity needed to deal with dynamic scenes. They are 46% better than prior neural planners and more robust than local control policies. Despite being trained only in simulation, M$\pi$Nets transfer well to the real robot with noisy partial point clouds. Code and data are publicly available at https://mpinets.github.io.
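To make the closed-loop, observation-driven formulation above concrete, the following is a minimal sketch (not the authors' code) of how such a learned policy could be rolled out: the scene is re-observed as a point cloud at every step and a network predicts a small joint-space displacement. The function names (`observe_point_cloud`, `policy`, `rollout`), the 7-DoF arm, and the stand-in policy body that simply steps toward a target configuration are all hypothetical placeholders, not the published architecture or API.

```python
# Hypothetical sketch of a closed-loop neural motion policy rollout.
# A trained network would replace the `policy` stand-in below.
import numpy as np

NUM_JOINTS = 7        # assumption: a 7-DoF arm (e.g. Franka-style)
CONTROL_STEPS = 300   # maximum rollout length


def observe_point_cloud() -> np.ndarray:
    """Placeholder for a single depth-camera observation (N x 3 points)."""
    return np.random.rand(4096, 3)


def policy(point_cloud: np.ndarray, q: np.ndarray, q_target: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network: returns a joint-space displacement.

    A real policy would encode the point cloud with a point-cloud network and
    predict the next step; here we just move a small, clipped step toward the
    target so the loop runs end to end. How the goal is provided to the
    network is likewise an assumption of this sketch.
    """
    step = 0.02 * (q_target - q)
    return np.clip(step, -0.05, 0.05)


def rollout(q_start: np.ndarray, q_target: np.ndarray) -> np.ndarray:
    """Execute the policy in closed loop, re-observing the scene every step."""
    q = q_start.copy()
    for _ in range(CONTROL_STEPS):
        pc = observe_point_cloud()        # fresh observation enables reactivity
        q = q + policy(pc, q, q_target)   # apply the predicted displacement
        if np.linalg.norm(q - q_target) < 1e-2:
            break
    return q


if __name__ == "__main__":
    q0 = np.zeros(NUM_JOINTS)
    qg = np.array([0.3, -0.4, 0.2, -1.2, 0.1, 1.0, 0.5])
    print("final configuration:", rollout(q0, qg))
```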