Over the last few years, we have witnessed tremendous progress on many subtasks of autonomous driving, including perception, motion forecasting, and motion planning. However, these systems often assume that the car is accurately localized against a high-definition map. In this paper we question this assumption, and investigate the issues that arise in state-of-the-art autonomy stacks under localization error. Based on our observations, we design a system that jointly performs perception, prediction, and localization. Our architecture is able to reuse computation across these tasks, and is thus able to correct localization errors efficiently. We show experiments on a large-scale autonomy dataset, demonstrating the efficiency and accuracy of our proposed approach.