Optical flow estimation has achieved great success in clean scenes but suffers degraded performance in foggy scenes. To bridge the clean-to-foggy domain gap, existing methods typically adopt domain adaptation to transfer motion knowledge from the clean domain to the synthetic foggy domain. However, these methods neglect the synthetic-to-real domain gap and thus perform poorly when applied to real-world foggy scenes. To handle practical optical flow in real foggy scenes, in this work, we propose a novel unsupervised cumulative domain adaptation optical flow (UCDA-Flow) framework with two stages: depth-association motion adaptation and correlation-alignment motion adaptation. Specifically, we discover that depth is a key factor influencing optical flow: the deeper the scene, the worse the estimated flow. This motivates us to design a depth-association motion adaptation module to bridge the clean-to-foggy domain gap. Moreover, we observe that the cost volume correlation shares a similar distribution between synthetic and real foggy images, which inspires us to devise a correlation-alignment motion adaptation module to distill the motion knowledge of the synthetic foggy domain into the real foggy domain. Note that the synthetic foggy domain serves as an intermediate domain. Under this unified framework, the proposed cumulative adaptation progressively transfers knowledge from clean scenes to real foggy scenes. Extensive experiments verify the superiority of the proposed method.
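For readers unfamiliar with the cost volume mentioned above, the following is a minimal sketch of dot-product correlation between two feature maps, the kind of construction a correlation-alignment module would operate on. The function name, window size, and normalization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cost_volume(f1, f2, max_disp=2):
    """Dot-product correlation cost volume between two feature maps.

    f1, f2: (H, W, C) feature maps extracted from two consecutive frames.
    Returns an (H, W, (2*max_disp+1)**2) array of correlation scores,
    one channel per candidate displacement in the search window.
    NOTE: a conceptual sketch, not the UCDA-Flow implementation.
    """
    H, W, C = f1.shape
    pad = max_disp
    # Zero-pad the second feature map so every displacement stays in bounds.
    f2p = np.pad(f2, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    channels = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            # Correlate f1 with f2 shifted by (dy - max_disp, dx - max_disp).
            shifted = f2p[dy:dy + H, dx:dx + W, :]
            channels.append(np.sum(f1 * shifted, axis=-1) / C)
    return np.stack(channels, axis=-1)
```

With unit-normalized features and identical frames, the zero-displacement channel (the center of the window) attains the maximum correlation at interior pixels; it is the per-channel distribution of such correlation scores that the correlation-alignment module would match between synthetic and real foggy inputs.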