Optical flow estimation is a basic task in self-driving and robotics systems, enabling the temporal interpretation of traffic scenes. Autonomous vehicles clearly benefit from the ultra-wide Field of View (FoV) offered by 360° panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not generalize satisfactorily to 360° panoramic images. In this paper, we put forward a novel network framework, PanoFlow, to learn optical flow for panoramic images. To overcome the distortions introduced by equirectangular projection in panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) and equirectangular flow distortion (FDA-E) variants. We further look into the definition and properties of cyclic optical flow for panoramic videos, and hereby propose a Cyclic Flow Estimation (CFE) method that leverages the cyclicity of spherical images to infer 360° optical flow, converting large displacements into relatively small ones. PanoFlow is applicable to any existing flow estimation method and benefits from the progress of narrow-FoV flow estimation. In addition, we create and release FlowScape, a synthetic panoramic dataset based on CARLA, to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet benchmark and the established FlowScape benchmark, reducing the End-Point-Error (EPE) on FlowScape by 27.3%. On OmniFlowNet, PanoFlow achieves a 55.5% error reduction from the best published result. We also qualitatively validate our method on a collection vehicle and the public real-world OmniPhotos dataset, indicating strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at https://github.com/MasterHow/PanoFlow.
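The cyclicity idea underlying CFE can be illustrated with a minimal sketch (not the authors' implementation): in an equirectangular panorama, a pixel that leaves the right border re-enters on the left, so any horizontal displacement u is equivalent to u ± k·W for image width W. Choosing the representative with the smallest magnitude turns a large apparent displacement into a small one. The function name below is hypothetical.

```python
import numpy as np

def cyclic_wrap_flow(u, width):
    """Wrap horizontal flow of an equirectangular panorama into [-W/2, W/2).

    Hypothetical sketch of the cyclicity used by Cyclic Flow Estimation:
    because the panorama is horizontally periodic, u and u - k*width describe
    the same motion; we return the equivalent displacement of minimal magnitude.
    """
    u = np.asarray(u, dtype=np.float64)
    # Shift into [0, W), then back, to obtain the wrapped representative.
    return (u + width / 2.0) % width - width / 2.0
```

For example, with a 1024-pixel-wide panorama, an estimated displacement of 900 px is equivalent to a much smaller leftward displacement of 124 px across the image seam.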