In this paper, we address the problem of multi-view 3D shape reconstruction. While recent differentiable rendering approaches associated with implicit shape representations have achieved breakthrough performance, they remain computationally heavy and often lack precision in the estimated geometry. To overcome these limitations, we investigate a new computational approach built on a novel shape representation that is volumetric, as in recent differentiable rendering approaches, but parameterized with depth maps to better materialize the shape surface. The shape energy associated with this representation evaluates 3D geometry given color images; it does not require appearance prediction but still benefits from volumetric integration when optimized. In practice, we propose an implicit shape representation, the SRDF, based on signed distances that we parameterize by depths along camera rays. The associated shape energy measures the agreement between depth prediction consistency and photometric consistency at 3D locations within the volumetric representation. Various photo-consistency priors can be accommodated, such as a median-based baseline or a more elaborate criterion with a learned function. The approach retains pixel accuracy through depth maps and is parallelizable. Our experiments on standard datasets show that it provides state-of-the-art results with respect to recent approaches with implicit shape representations as well as to traditional multi-view stereo methods.
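The core idea above, signed distances parameterized by depths along camera rays, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the sampling scheme, and the scalar depth value are all illustrative assumptions: along a single camera ray, a predicted surface depth induces a signed ray distance that is positive in front of the surface, zero at it, and negative behind it.

```python
import numpy as np

def srdf_along_ray(depth, t_samples):
    """Signed ray distance along one camera ray (illustrative sketch).

    `depth` is the predicted depth at which the ray meets the surface;
    `t_samples` are sample depths along the ray. The value is positive
    before the surface, zero at it, and negative past it.
    """
    return depth - t_samples

# Example: a ray whose surface intersection is predicted at depth 2.0,
# evaluated at a few sample depths along the ray.
t = np.array([0.5, 1.0, 2.0, 3.0])
vals = srdf_along_ray(2.0, t)
print(vals)  # [ 1.5  1.   0.  -1. ]
```

In a full volumetric setup, such per-ray values would be evaluated at 3D locations shared across views, where the energy compares depth consistency against photometric consistency.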