This paper addresses a new task, image morphing in a multiview setting, which takes two sets of multiview images as input and generates intermediate renderings that not only exhibit smooth transitions between the two input sets but also remain visually consistent across different views at any transition state. To achieve this goal, we propose a novel approach, Multiview Regenerative Morphing, that formulates the morphing process as an optimization solving for rigid transformation and optimal-transport interpolation. Given the multiview input images of the source and target scenes, we first learn a volumetric representation that models the geometry and appearance of each scene, enabling the rendering of novel views. The morphing between the two scenes is then obtained by solving optimal transport between the two volumetric representations under the Wasserstein metric. Our approach does not rely on user-specified correspondences or 2D/3D input meshes, and we do not assume any predefined categories of the source and target scenes. The proposed view-consistent interpolation scheme works directly on multiview images to yield a novel and visually plausible effect of multiview free-form morphing.
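To make the optimal-transport interpolation concrete, the following is a minimal sketch (not the authors' implementation) that assumes each scene's volumetric representation has been abstracted into a weighted point cloud: an entropic-regularized (Sinkhorn) plan couples the two clouds, and an intermediate state at transition time t is the induced displacement interpolation. All function names and parameters (`sinkhorn_plan`, `displacement_interp`, `reg`, `n_iters`) are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (not the paper's code) of Wasserstein / optimal-transport
# interpolation between two scenes, each abstracted as a weighted point cloud.
import numpy as np

def sinkhorn_plan(a, b, C, reg=0.05, n_iters=200):
    """Entropic-regularized OT plan between histograms a, b for cost matrix C."""
    C = C / C.max()                        # normalize cost to keep the kernel stable
    K = np.exp(-C / reg)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):               # alternating Sinkhorn projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]     # transport plan T (rows sum to a, cols to b)

def displacement_interp(x_src, x_tgt, T, t):
    """Intermediate point cloud at transition state t in [0, 1].

    Each pair (i, j) carrying transport mass T[i, j] contributes one point moved
    linearly from x_src[i] toward x_tgt[j]; the mass it carries is unchanged.
    """
    i, j = np.nonzero(T > 1e-8)            # keep only pairs with non-negligible mass
    pts = (1.0 - t) * x_src[i] + t * x_tgt[j]
    return pts, T[i, j]

# Toy usage: two random "scenes" with uniform density weights.
rng = np.random.default_rng(0)
x_src = rng.normal(size=(128, 3))
x_tgt = rng.normal(size=(128, 3)) + 2.0
a = np.full(128, 1.0 / 128)
b = np.full(128, 1.0 / 128)
C = ((x_src[:, None, :] - x_tgt[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
T = sinkhorn_plan(a, b, C)
pts_half, mass_half = displacement_interp(x_src, x_tgt, T, t=0.5)
```

In this reading, the rigid-transformation component mentioned in the abstract would be applied to align the two clouds before the transport problem is solved; that step is omitted here for brevity.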