The challenges presented in autonomous racing are distinct from those faced in regular autonomous driving: they demand faster end-to-end algorithms and a longer planning horizon, so that the optimal current action is chosen with upcoming maneuvers and situations in mind. In this paper, we propose an end-to-end method for autonomous racing that takes video from an onboard camera as input and outputs the final steering and throttle control actions. We construct the method in three stages: (1) learning a low-dimensional representation of the scene, (2) generating the optimal trajectory for the given scene, and (3) tracking the predicted trajectory using a classical control method. To learn the low-dimensional representation of the scene, we use intermediate representations together with a novel unsupervised trajectory planner to generate expert trajectories, which we then use to predict race lines directly from a front-facing input image. The proposed algorithm thus combines the best of both worlds: the robustness of learning-based approaches to perception and the accuracy of optimization-based approaches to trajectory generation, within an end-to-end learning-based framework. We deploy and demonstrate our framework in CARLA, a photorealistic simulator for testing self-driving cars in realistic environments.
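The three-stage split described above can be sketched as a minimal pipeline. This is an illustrative toy, not the paper's implementation: all function names and the stub computations inside them are hypothetical placeholders standing in for the learned encoder, the trajectory predictor, and the classical tracking controller.

```python
# Minimal sketch of the three-stage pipeline: perception -> trajectory -> control.
# All names and internals are illustrative placeholders, not the paper's API.

def encode_scene(image):
    """(1) Map a camera image to a low-dimensional scene representation.
    Stub: average each pixel row into one feature (stands in for a learned encoder)."""
    return [sum(row) / len(row) for row in image]

def predict_raceline(features, horizon=5):
    """(2) Predict the race line (trajectory) from the scene representation.
    Stub: emit `horizon` waypoints at a constant lateral offset derived from the features."""
    lateral = sum(features) / len(features)
    return [(step, lateral) for step in range(horizon)]

def track_trajectory(trajectory, gain=0.5):
    """(3) Track the predicted trajectory with a classical controller.
    Stub: proportional steering toward the first waypoint's lateral offset, fixed throttle."""
    _, lateral = trajectory[0]
    steering = -gain * lateral
    throttle = 0.8
    return steering, throttle

# End-to-end: image in, (steering, throttle) out.
image = [[0.1, 0.2], [0.3, 0.4]]
steering, throttle = track_trajectory(predict_raceline(encode_scene(image)))
```

The separation mirrors the paper's design choice: only stage (1) is learned, stage (2) is supervised by an optimization-based planner, and stage (3) remains a classical controller, so perception robustness and trajectory accuracy are handled by the tools best suited to each.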