Optical sensors and learning algorithms for autonomous vehicles have advanced dramatically in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by their limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments in telecommunication technologies, cooperative perception with vehicle-to-vehicle communication has become a promising paradigm to enhance autonomous driving in dangerous or emergency situations. We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving. Our model encodes LiDAR information into compact point-based representations that can be transmitted as messages between vehicles via realistic wireless channels. To evaluate our model, we develop AutoCastSim, a network-augmented driving simulation framework with example accident-prone scenarios. Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate over egocentric driving models in these challenging driving situations, with a 5 times lower bandwidth requirement than the prior work V2VNet. COOPERNAUT and AutoCastSim are available at https://ut-austin-rpl.github.io/Coopernaut/.
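To make the cross-vehicle pipeline described above concrete, the sketch below illustrates the two stages the abstract names: each vehicle compresses its LiDAR sweep into a compact point-based message, and the ego vehicle fuses received messages with its own representation before driving. This is a minimal, hypothetical PyTorch sketch, not the released COOPERNAUT architecture; the module names (PointEncoder, MessageFusion), the norm-based keypoint selection, and all dimensions are illustrative assumptions.

```python
# Illustrative sketch of cross-vehicle message passing (hypothetical, not the
# official COOPERNAUT code): encode a LiDAR cloud into K keypoint features,
# broadcast them, and fuse received messages on the ego vehicle.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Compress an (N, 3) LiDAR cloud into K keypoints with D-dim features."""
    def __init__(self, num_keypoints: int = 128, feat_dim: int = 64):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, points: torch.Tensor):
        # Per-point features; keep the K highest-norm points as a crude
        # stand-in for a learned keypoint-selection mechanism.
        feats = self.mlp(points)                       # (N, D)
        idx = feats.norm(dim=-1).topk(self.num_keypoints).indices
        return points[idx], feats[idx]                 # the compact message

class MessageFusion(nn.Module):
    """Fuse ego features with keypoint messages from other vehicles."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, ego_feats: torch.Tensor, msg_feats: list) -> torch.Tensor:
        # Stack all received keypoint features (assumed already transformed
        # into the ego frame) and let ego features attend over them.
        context = torch.cat([ego_feats] + msg_feats, dim=0).unsqueeze(0)
        query = ego_feats.unsqueeze(0)
        fused, _ = self.attn(query, context, context)
        return fused.squeeze(0)                        # (K, D)

# Usage: each sender runs the encoder; the ego vehicle fuses all messages.
encoder, fusion = PointEncoder(), MessageFusion()
ego_xyz = torch.randn(2048, 3)                         # ego LiDAR sweep
other_xyz = torch.randn(2048, 3)                       # a neighbor's sweep
_, ego_f = encoder(ego_xyz)
_, msg_f = encoder(other_xyz)                          # ~128 x 64 floats per message
control_feats = fusion(ego_f, [msg_f])                # input to a driving-policy head
```

Under these assumptions, a message is on the order of a hundred keypoint features rather than a full point cloud, which is the kind of compression that makes transmission over realistic wireless channels feasible.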