Detecting objects under adverse weather and lighting conditions is crucial for the safe and continuous operation of an autonomous vehicle, and remains an unsolved problem. We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture, which can be plugged into existing object detection networks (e.g., Yolo) and trained end-to-end with adverse condition images such as those captured under fog and low lighting. Our proposed GDIP block learns to enhance images directly through the downstream object detection loss. This is achieved by learning parameters of multiple image pre-processing (IP) techniques that operate concurrently, with their outputs combined using weights learned through a novel gating mechanism. We further improve GDIP through a multi-stage guidance procedure for progressive image enhancement. Finally, trading off accuracy for speed, we propose a variant of GDIP that can be used as a regularizer for training Yolo, which eliminates the need for GDIP-based image enhancement during inference, resulting in higher throughput and plausible real-world deployment. We demonstrate significant improvement in detection performance over several state-of-the-art methods through quantitative and qualitative studies on synthetic datasets such as PascalVOC, and real-world foggy (RTTS) and low-lighting (ExDark) datasets.
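Below is a minimal sketch (not the authors' released code) of a GDIP-style gated image-processing block in PyTorch, intended only to illustrate the mechanism described above: several differentiable image pre-processing (IP) operations run concurrently on the input, and their outputs are blended by gate weights learned jointly with the downstream detector. The specific IP operations (gamma correction, white balance, unsharp sharpening), the encoder architecture, and all parameter ranges are illustrative assumptions; only the overall gated-combination idea comes from the abstract.

```python
# Hypothetical GDIP-style block: concurrent differentiable IP ops + learned gating.
# Illustrative sketch only; operations, sizes, and ranges are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedDIPBlock(nn.Module):
    """Enhance an image by gating the outputs of several differentiable IP operations."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Tiny encoder that summarizes the (possibly degraded) input image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Heads predicting per-image parameters for each IP operation.
        self.gamma_head = nn.Linear(feat_dim, 1)   # gamma-correction exponent
        self.wb_head = nn.Linear(feat_dim, 3)      # per-channel white-balance gains
        self.sharp_head = nn.Linear(feat_dim, 1)   # unsharp-mask strength
        # Gating head: one weight per IP output (plus identity), softmax-normalized.
        self.gate_head = nn.Linear(feat_dim, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                        # (B, feat_dim)
        b = x.size(0)

        # 1) Gamma correction: x ** gamma, gamma constrained to (0.5, 2.5).
        gamma = 0.5 + 2.0 * torch.sigmoid(self.gamma_head(z)).view(b, 1, 1, 1)
        out_gamma = x.clamp(min=1e-4) ** gamma

        # 2) White balance: per-channel gains around 1.
        wb = 0.5 + torch.sigmoid(self.wb_head(z)).view(b, 3, 1, 1)
        out_wb = (x * wb).clamp(0.0, 1.0)

        # 3) Sharpening via a simple unsharp mask.
        blur = F.avg_pool2d(x, 3, stride=1, padding=1)
        strength = torch.sigmoid(self.sharp_head(z)).view(b, 1, 1, 1)
        out_sharp = (x + strength * (x - blur)).clamp(0.0, 1.0)

        # Gate: convex combination of the identity and the concurrent IP outputs.
        gates = torch.softmax(self.gate_head(z), dim=1)                   # (B, 4)
        stacked = torch.stack([x, out_gamma, out_wb, out_sharp], dim=1)   # (B, 4, 3, H, W)
        enhanced = (gates.view(b, 4, 1, 1, 1) * stacked).sum(dim=1)
        return enhanced


if __name__ == "__main__":
    gdip = GatedDIPBlock()
    imgs = torch.rand(2, 3, 256, 256)   # stand-in for foggy / low-light images
    enhanced = gdip(imgs)
    print(enhanced.shape)               # torch.Size([2, 3, 256, 256])
```

In end-to-end use, the enhanced image would be fed to the detector (e.g., Yolo) and the block trained solely through the detection loss, so the IP parameters and gate weights are learned without any enhancement-specific supervision.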