We present a novel trajectory traversability estimation and planning algorithm for robot navigation in complex outdoor environments. We incorporate multi-modal sensory inputs from an RGB camera, a 3D LiDAR, and the robot's odometry sensor to train a prediction model that estimates the success probabilities of candidate trajectories from partially reliable multi-modal sensor observations. We encode the high-dimensional multi-modal sensory inputs into low-dimensional feature vectors using encoder networks and represent them as a connected graph, which is used to train an attention-based Graph Neural Network (GNN) model that predicts trajectory success probabilities. We further analyze the image and point cloud data separately to quantify sensor reliability and use it to augment the weights of the feature graph representation used in our GNN. During runtime, our model uses the multi-sensor inputs to predict the success probabilities of the trajectories generated by a local planner, avoiding potential collisions and failures. Our algorithm demonstrates robust predictions even when one or more sensor modalities are unreliable or unavailable in complex outdoor environments. We evaluate our algorithm's navigation performance using a Spot robot in real-world outdoor environments.
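The reliability-weighted fusion described above can be illustrated with a minimal NumPy sketch. This is not the paper's actual network: the encoder outputs, attention logits, and logistic head below are toy stand-ins, and the `fuse_modalities` and `success_probability` names are hypothetical. The sketch only shows the general idea of scaling each modality's contribution by a reliability score before aggregating and predicting a trajectory success probability.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(features, reliability):
    """Toy attention-style fusion of per-modality feature vectors.

    features: dict of modality name -> 1-D encoded feature vector (equal length),
              standing in for the encoder-network outputs.
    reliability: dict of modality name -> scalar in [0, 1] that scales each
              modality's attention logit, mimicking reliability-augmented
              weights on the feature graph.
    Returns the fused feature vector and the attention weight per modality.
    """
    names = list(features)
    F = np.stack([features[n] for n in names])  # (num_modalities, dim)
    # Hypothetical attention logits: feature norm scaled by sensor reliability,
    # so an unreliable or missing sensor contributes little to the fused feature.
    logits = np.array([reliability[n] * np.linalg.norm(features[n]) for n in names])
    attn = softmax(logits)
    return attn @ F, dict(zip(names, attn))

def success_probability(fused, w, b=0.0):
    """Logistic head mapping a fused feature to a trajectory success probability."""
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))
```

For example, dropping the camera's reliability toward zero shifts the attention mass onto the LiDAR and odometry features, so the success prediction degrades gracefully rather than failing outright, which is the behavior the abstract claims for unreliable or unavailable modalities.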