Legged robots have achieved remarkable performance in blind walking using either model-based control or data-driven deep reinforcement learning. To proactively navigate and traverse various terrains, however, active use of visual perception becomes indispensable, and this work aims to exploit sparse visual observations to achieve perceptual locomotion over a range of commonly encountered bumps, ramps, and stairs in human-centred environments. We first formulate the selection of a minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive data with proprioceptive data. We specifically select state observations and design a training curriculum to learn feedback control policies more effectively over a range of different terrains. Using an extensive benchmark, we validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate of traversal. In particular, the robot performs autonomous perceptual locomotion with minimal visual perception using depth measurements, which are easily available from a Lidar or RGB-D sensor, and successfully demonstrates robust ascent and descent over high stairs of 20 cm step height, i.e., 50% of its leg length.