Multiview depth imagery will play a critical role in free-viewpoint television, which requires high-quality virtual view synthesis to enable viewers to move freely in a dynamic real-world scene. Depth imagery at different viewpoints is used to synthesize an arbitrary number of novel views. Usually, depth images at multiple viewpoints are estimated individually by stereo-matching algorithms and therefore lack inter-view consistency. This inconsistency negatively affects the quality of view synthesis. This paper proposes a method for depth consistency testing in a depth difference subspace to enhance the depth representation of a scene across multiple viewpoints. Furthermore, we propose a view synthesis algorithm that uses the obtained consistency information to improve the visual quality of virtual views at arbitrary viewpoints. Our method finds a linear subspace for the depth difference measurements in which the inter-view consistency can be tested efficiently. With this, our approach is able to enhance the depth information for real-world scenes. In combination with our consistency-adaptive view synthesis, we improve the visual experience of the free-viewpoint user. Experiments show that our approach improves the objective quality of virtual views by up to 1.4 dB; the advantage in subjective quality is also demonstrated.
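The core idea of testing consistency in a linear subspace of depth-difference measurements can be illustrated with a minimal sketch. The snippet below is not the paper's actual construction; it assumes a PCA-style subspace fit and a residual threshold `tau`, both hypothetical stand-ins, to show how a depth-difference vector could be classified as inter-view consistent or not.

```python
import numpy as np

def fit_difference_subspace(diff_samples, k=2):
    """Fit a k-dimensional linear subspace to depth-difference
    vectors via PCA (an illustrative stand-in for the paper's
    subspace construction). diff_samples has shape (n, d)."""
    mean = diff_samples.mean(axis=0)
    centered = diff_samples - mean
    # Principal axes of the difference measurements
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # (k, d) basis spanning the subspace

def consistency_test(diff_vec, mean, basis, tau=1.0):
    """Declare a depth-difference vector 'consistent' if it lies
    close to the learned subspace, i.e. its residual after
    projection onto the subspace is below the threshold tau."""
    centered = diff_vec - mean
    projected = basis.T @ (basis @ centered)
    residual = np.linalg.norm(centered - projected)
    return residual <= tau
```

In this simplified picture, depth maps of consistent views produce difference vectors concentrated in a low-dimensional subspace, so the residual acts as an inexpensive per-pixel (or per-patch) consistency score.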