We consider the problem of two-view matching under significant viewpoint changes with view synthesis. We propose two novel methods that minimize the view-synthesis overhead. The first one, named DenseAffNet, uses dense affine shape estimates from AffNet, which allows it to partition the image, rectifying each partition with just a single affine map. The second one, named DepthAffNet, combines information from depth maps and affine shape estimates to produce different sets of rectifying affine maps for different image partitions. DenseAffNet is faster than the state of the art and more accurate on generic scenes. DepthAffNet is on par with the state of the art on scenes containing large planes. The evaluation is performed on three public datasets: the EVD Dataset, the Strong ViewPoint Changes Dataset, and the IMC Phototourism Dataset.
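To make the partition-and-rectify idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes dense affine shape estimates (2x2 matrices, e.g. as produced by AffNet) are already available per sample location, clusters them so that each partition shares a similar local affine distortion, and warps each partition with a single rectifying affine map. The choice of k-means, the cluster-mean representative shape, and its inversion as the rectifier are illustrative simplifications.

```python
# Minimal sketch of partition-by-affine-shape rectification (assumptions noted above).
import numpy as np
import cv2
from sklearn.cluster import KMeans

def rectify_by_affine_clusters(image, affine_shapes, n_clusters=3):
    """image: HxW(x3) uint8 array; affine_shapes: Nx2x2 local affine shape estimates."""
    # Cluster the flattened affine shapes so regions with similar local
    # affine distortion fall into the same partition.
    feats = affine_shapes.reshape(len(affine_shapes), 4)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

    h, w = image.shape[:2]
    rectified = []
    for k in range(n_clusters):
        members = affine_shapes[labels == k]
        if len(members) == 0:
            continue
        # One representative shape per cluster (here: the mean); its inverse
        # serves as the single rectifying affine map for the whole partition.
        A_rect = np.linalg.inv(members.mean(axis=0))
        M = np.hstack([A_rect, np.zeros((2, 1))])  # 2x3 warp matrix, no translation
        rectified.append((k, cv2.warpAffine(image, M, (w, h))))
    return labels, rectified
```

In this sketch each cluster yields one warped copy of the image, on which standard feature detection and matching would then be run; this reflects the stated benefit of rectifying each partition with just a single affine map rather than synthesizing many views.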