We present a unified formulation and model for three motion and 3D perception tasks: optical flow, rectified stereo matching, and unrectified stereo depth estimation from posed images. Unlike prior work that designs a specialized architecture for each task, we formulate all three as a unified dense correspondence matching problem that a single model can solve by directly comparing feature similarities. Such a formulation calls for discriminative feature representations, which we obtain with a Transformer, in particular its cross-attention mechanism. We demonstrate that cross-attention integrates knowledge from the other image via cross-view interactions, which greatly improves the quality of the extracted features. Our unified model naturally enables cross-task transfer, since the architecture and parameters are shared across tasks. With our unified model we outperform RAFT on the challenging Sintel dataset, and our final model, which adds a few task-specific refinement steps, outperforms or compares favorably to recent state-of-the-art methods on 10 popular flow, stereo, and depth datasets, while being simpler and more efficient in model design and inference speed.
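The core idea of solving all three tasks by directly comparing feature similarities can be illustrated with a minimal sketch: build a pairwise similarity matrix between per-pixel features of the two views, turn it into a soft correspondence distribution with a softmax, and read off the displacement as the probability-weighted average of matched coordinates. This is an illustrative simplification, not the paper's exact model; the function name, temperature parameter, and NumPy implementation are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dense_match(feat1, feat2, h, w, temperature=0.1):
    """Match every pixel of view 1 to view 2 by feature similarity.

    feat1, feat2: (h*w, c) per-pixel feature vectors (e.g., from a
    Transformer with cross-attention, as in the unified model).
    Returns a dense flow field of shape (h*w, 2): the softmax-weighted
    expected target coordinates minus the source coordinates.
    """
    c = feat1.shape[1]
    corr = feat1 @ feat2.T / np.sqrt(c)            # (h*w, h*w) similarity matrix
    prob = softmax(corr / temperature, axis=1)     # soft correspondence per pixel
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    matched = prob @ coords                        # expected matched coordinates
    return matched - coords                        # displacement = dense flow
```

Interpreted over a 2D search window this displacement is optical flow; restricted to the same scan line of a rectified pair it is stereo disparity; along an epipolar line of a posed image pair it yields depth, which is why a single similarity-based model can serve all three tasks.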