Visual perception is a crucial component of autonomous driving, enabling a vehicle to navigate safely and reliably in diverse traffic conditions. However, in adverse weather such as heavy rain and haze, perception performance degrades severely under several corrupting effects. Recent deep learning-based perception methods address multiple degrading effects to reflect real-world bad-weather conditions, but their success has been limited by 1) the high computational cost of deployment on mobile devices and 2) the weak coupling between image enhancement and the downstream visual perception task. To address these issues, we propose a task-driven image enhancement network, connected to a high-level vision task, that takes an image corrupted by bad weather as input. Specifically, we introduce a novel low-memory network that removes most of the layer connections in dense blocks, reducing memory and computational cost while maintaining high performance. We also introduce a new task-driven training strategy that robustly guides the high-level task model toward both high-quality image restoration and highly accurate perception. Experimental results demonstrate that the proposed method substantially improves lane detection, 2D object detection, and depth estimation under adverse weather, in terms of both memory footprint and accuracy.
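The low-memory idea above can be illustrated with a toy NumPy sketch of a dense block in which each layer, instead of concatenating all previous feature maps (standard dense connectivity), keeps only the most recent few. This is a minimal sketch of one plausible connection-pruning scheme, not the paper's exact architecture; `conv_layer`, `max_inputs`, and all shapes are hypothetical stand-ins.

```python
import numpy as np

def conv_layer(x, out_ch, rng):
    # Stand-in for a learned conv layer: a random channel-mixing map + ReLU.
    w = rng.standard_normal((x.shape[0], out_ch))
    return np.maximum(x.T @ w, 0).T  # shape (out_ch, n_pixels)

def dense_block(x, n_layers, growth, max_inputs=None, rng=None):
    """Toy dense block over channel-first features of shape (C, N).

    max_inputs=None -> standard dense connectivity: every layer sees
                       the concatenation of ALL previous feature maps.
    max_inputs=k    -> illustrative low-memory variant: each layer sees
                       only the k most recent feature maps, so the
                       per-layer concatenation width stays bounded.
    """
    rng = rng or np.random.default_rng(0)
    feats = [x]
    for _ in range(n_layers):
        inputs = feats if max_inputs is None else feats[-max_inputs:]
        h = np.concatenate(inputs, axis=0)   # concat along channel axis
        feats.append(conv_layer(h, growth, rng))
    return np.concatenate(feats, axis=0)
```

With `n_layers=4` and `growth=16`, the standard block's widest layer input grows with depth, while the `max_inputs=2` variant caps it, which is the source of the memory saving.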
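The task-driven training strategy couples restoration quality to downstream task accuracy; a common way to express such coupling is a weighted joint objective. The sketch below is a hypothetical illustration of that idea with placeholder MSE terms and weights; the paper's actual losses and scheduling may differ.

```python
import numpy as np

def task_driven_loss(restored, clean, task_pred, task_target,
                     restore_weight=1.0, task_weight=1.0):
    """Joint objective: image restoration loss + downstream task loss.

    Illustrative only: both terms are placeholder MSE losses, and the
    fixed scalar weights stand in for whatever balancing the actual
    training strategy uses.
    """
    restore_loss = np.mean((restored - clean) ** 2)      # enhancement term
    task_loss = np.mean((task_pred - task_target) ** 2)  # perception term
    return restore_weight * restore_loss + task_weight * task_loss
```

Backpropagating the task term through the enhancement network is what makes the enhancement "task-driven": the restored image is optimized for the perception model, not only for pixel fidelity.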