Optical flow estimation is a basic task in self-driving and robotics systems, enabling the temporal interpretation of traffic scenes. Autonomous vehicles clearly benefit from the ultra-wide Field of View (FoV) offered by 360° panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not generalize satisfactorily to 360° panoramic images. In this paper, we put forward a novel network framework, PanoFlow, to learn optical flow for panoramic images. To overcome the distortions introduced by equirectangular projection in panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) or equirectangular flow distortion (FDA-E). We further investigate the definition and properties of cyclic optical flow for panoramic videos, and hereby propose a Cyclic Flow Estimation (CFE) method that leverages the cyclicity of spherical images to infer 360° optical flow and convert large displacements into relatively small ones. PanoFlow is applicable to any existing flow estimation method and benefits from the progress of narrow-FoV flow estimation. In addition, we create and release a synthetic panoramic dataset, Flow360, based on CARLA to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet benchmark and the established Flow360 benchmark. Our proposed approach reduces the End-Point-Error (EPE) on Flow360 by 27.3%. On OmniFlowNet, PanoFlow achieves an EPE of 3.17 pixels, a 55.5% error reduction from the best published result. We also qualitatively validate our method on a collection vehicle and the public real-world OmniPhotos dataset, indicating strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at https://github.com/MasterHow/PanoFlow.
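The cyclic-estimation idea summarized above, that the horizontal wrap-around of an equirectangular panorama lets motion across the left/right seam be re-expressed as a small displacement, can be illustrated with a minimal sketch. This is not the authors' implementation from the repository; the flow-estimator interface, the half-width roll, and the minimum-magnitude fusion rule are assumptions made only for illustration.

```python
import numpy as np

def cyclic_flow(estimate_flow, img1, img2):
    """Minimal sketch of cyclic flow estimation on equirectangular frames.

    `estimate_flow` stands for any narrow-FoV flow estimator returning an
    (H, W, 2) array of (dx, dy) displacements; its interface is assumed here.
    """
    H, W = img1.shape[:2]
    shift = W // 2

    # Flow on the original frame pair.
    flow_a = estimate_flow(img1, img2)

    # Flow on frames rolled by half the panorama width, so that motion across
    # the left/right seam lands near the image centre as a small displacement.
    flow_b = estimate_flow(np.roll(img1, shift, axis=1),
                           np.roll(img2, shift, axis=1))
    flow_b = np.roll(flow_b, -shift, axis=1)  # roll back to original coordinates

    # Fuse: per pixel, keep whichever estimate has the smaller displacement.
    mag_a = np.linalg.norm(flow_a, axis=-1)
    mag_b = np.linalg.norm(flow_b, axis=-1)
    take_b = (mag_b < mag_a)[..., None]
    return np.where(take_b, flow_b, flow_a)
```

Because both frames are rolled by the same amount, the flow vectors themselves remain valid and only their spatial locations need to be rolled back before fusion; any off-the-shelf estimator can therefore be plugged in, which matches the abstract's claim that the approach is applicable to existing flow methods.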