Adverse weather conditions, including snow, rain, and fog, pose a challenge for both human and computer vision in outdoor scenarios. Handling these environmental conditions is essential for safe decision making, especially in autonomous vehicles, robotics, and drones. Most of today's supervised imaging and vision approaches, however, rely on training data collected in the real world that is biased towards good weather conditions, with dense fog, snow, and heavy rain appearing only as outliers in these datasets. Lacking such training data, let alone paired data, existing autonomous vehicles often restrict operation to good conditions and stop when dense fog or snow is detected. In this work, we tackle the lack of supervised training data by combining synthetic and indirect supervision. We present ZeroScatter, a domain transfer method for converting RGB-only captures taken in adverse weather into clear daytime scenes. ZeroScatter exploits model-based, temporal, multi-view, multi-modal, and adversarial cues in a joint fashion, allowing us to train on unpaired, biased data. We validate the proposed method on real-world captures, where it outperforms existing monocular de-scattering approaches by 2.8 dB PSNR on controlled fog chamber measurements.
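For reference, the reported 2.8 dB gain is measured in peak signal-to-noise ratio. The following is the conventional definition of PSNR between a restored image $\hat{I}$ and its clear-reference image $I$ (the symbols $I$, $\hat{I}$, $N$, and $\mathrm{MAX}$ are introduced here for illustration and are not taken from the abstract):
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(I_i - \hat{I}_i\bigr)^2,$$
where $\mathrm{MAX}$ is the peak pixel value (e.g., 255 for 8-bit images) and the mean squared error is averaged over all $N$ pixels. A 2.8 dB improvement thus corresponds to roughly a factor of $10^{0.28} \approx 1.9$ reduction in mean squared error relative to the compared monocular de-scattering approaches.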