FlowFormer introduces a transformer architecture into optical flow estimation and achieves state-of-the-art performance. The core component of FlowFormer is the transformer-based cost-volume encoder. Inspired by the recent success of masked autoencoding (MAE) pretraining in unleashing transformers' capacity for encoding visual representations, we propose Masked Cost Volume Autoencoding (MCVA) to enhance FlowFormer by pretraining the cost-volume encoder with a novel MAE scheme. First, we introduce a block-sharing masking strategy to prevent masked information leakage, as the cost maps of neighboring source pixels are highly correlated. Second, we propose a novel pretext reconstruction task, which encourages the cost-volume encoder to aggregate long-range information and ensures pretraining-finetuning consistency. We also show how to modify the FlowFormer architecture to accommodate masks during pretraining. Pretrained with MCVA, FlowFormer++ ranks 1st among published methods on both the Sintel and KITTI-2015 benchmarks. Specifically, FlowFormer++ achieves 1.07 and 1.94 average end-point error (AEPE) on the clean and final passes of the Sintel benchmark, yielding 7.76\% and 7.18\% error reductions over FlowFormer. FlowFormer++ obtains 4.52 F1-all on the KITTI-2015 test set, improving over FlowFormer by 0.16.
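To make the block-sharing masking idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes the cost volume stores one H2xW2 cost map per source pixel, that cost maps are masked patch-wise as in MAE, and that source pixels are grouped into square blocks that share one random patch mask. The function name, block size, patch size, and mask ratio are all illustrative assumptions.

```python
import torch

def block_sharing_mask(h1, w1, h2, w2, block=8, patch=4, mask_ratio=0.5, device="cpu"):
    """Sketch of block-sharing masking: neighboring source pixels within the same
    block x block window share one random patch mask over their cost maps, so a
    masked patch cannot be trivially recovered from an adjacent, unmasked cost map.

    Returns a boolean tensor of shape (h1*w1, num_patches); True = masked patch.
    """
    nb_h, nb_w = h1 // block, w1 // block            # number of source-pixel blocks
    num_patches = (h2 // patch) * (w2 // patch)      # patches per cost map
    n_masked = int(mask_ratio * num_patches)

    # one random mask per block (standard MAE-style random ranking of patches)
    scores = torch.rand(nb_h * nb_w, num_patches, device=device)
    rank = scores.argsort(dim=1).argsort(dim=1)      # rank of each patch's score
    block_mask = rank < n_masked                     # mask the n_masked lowest-ranked patches

    # broadcast each block's mask to all source pixels inside that block
    mask = block_mask.view(nb_h, nb_w, num_patches)
    mask = mask.repeat_interleave(block, dim=0).repeat_interleave(block, dim=1)
    return mask.reshape(h1 * w1, num_patches)

# usage: a 64x64 source image with 16x16 cost maps, 50% of patches masked per block
mask = block_sharing_mask(64, 64, 16, 16)
```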