As robots operate in increasingly complex and dynamic environments, fast motion re-planning has become a widely explored area of research. In real-world deployments, we often lack the ability to fully observe the environment at all times, giving rise to the challenge of determining how best to perceive the environment given a continuously updated motion plan. We provide the first investigation into a `smart' controller for gaze control with the objective of providing effective perception of the environment for obstacle avoidance and motion planning in dynamic and unknown environments. We detail the novel problem of determining the best head camera behaviour for mobile robots when constrained by a trajectory. Furthermore, we propose a greedy optimisation-based solution that uses a combination of voxelised rewards and motion primitives. We demonstrate that our method outperforms the benchmark methods in 2D and 3D environments, with respect to both the ability to explore the local surroundings and the success rate of finding collision-free trajectories -- our method is shown to provide 7.4x better map exploration while consistently achieving a higher success rate for generating collision-free trajectories. We verify our findings on a physical Toyota Human Support Robot (HSR) using a GPU-accelerated perception framework.
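To give a concrete sense of the greedy, reward-driven primitive selection described above, the following is a minimal sketch, assuming a simplified 2D pan-only camera model. All names and the primitive parameterisation (visible_voxels, greedy_primitive, the pan-angle candidates) are hypothetical illustrations, not the paper's implementation: each candidate head-motion primitive is scored by summing the rewards of the voxels it would bring into view, and the highest-scoring primitive is executed.

```python
import numpy as np


def visible_voxels(camera_pose, primitive, voxel_centres, fov_rad=1.0, max_range=3.0):
    """Return indices of voxels inside the camera frustum after applying the
    candidate pan primitive (simplified planar pan-only check)."""
    pan = camera_pose[2] + primitive["pan"]
    offsets = voxel_centres[:, :2] - camera_pose[:2]
    dists = np.linalg.norm(offsets, axis=1)
    bearings = np.arctan2(offsets[:, 1], offsets[:, 0])
    angular_err = np.abs(np.arctan2(np.sin(bearings - pan), np.cos(bearings - pan)))
    return np.where((dists < max_range) & (angular_err < fov_rad / 2.0))[0]


def greedy_primitive(camera_pose, primitives, voxel_centres, voxel_rewards):
    """Greedily pick the head-motion primitive whose predicted view maximises
    the summed reward of the voxels it would observe."""
    best, best_score = None, -np.inf
    for prim in primitives:
        score = voxel_rewards[visible_voxels(camera_pose, prim, voxel_centres)].sum()
        if score > best_score:
            best, best_score = prim, score
    return best, best_score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centres = rng.uniform(-3.0, 3.0, size=(500, 3))   # candidate voxel centres
    rewards = rng.uniform(0.0, 1.0, size=500)         # e.g. reward for unobserved space
    pose = np.array([0.0, 0.0, 0.0])                  # x, y, yaw of the head camera
    candidates = [{"pan": p} for p in np.linspace(-np.pi / 2, np.pi / 2, 7)]
    prim, score = greedy_primitive(pose, candidates, centres, rewards)
    print("chosen pan:", prim["pan"], "score:", score)
```

In practice, the reward map would be coupled to the planned trajectory (e.g. weighting voxels near the upcoming path more heavily), which is what ties the gaze controller to the re-planning loop; the sketch above omits that coupling for brevity.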