With the success of neural volume rendering in novel view synthesis, neural implicit reconstruction with volume rendering has become popular. However, most methods optimize per-scene functions and are unable to generalize to novel scenes. We introduce VolRecon, a generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF). To reconstruct with fine details and little noise, we combine projection features, aggregated from multi-view features with a view transformer, with volume features interpolated from a coarse global feature volume. A ray transformer computes the SRDF values of all samples along a ray to estimate the surface location; these values are then used for volume rendering of color and depth. Extensive experiments on DTU and ETH3D demonstrate the effectiveness and generalization ability of our method. On DTU, our method outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves quality comparable to MVSNet in full view reconstruction. Furthermore, our method shows good generalization ability on the large-scale ETH3D benchmark. Project page: https://fangjinhuawang.github.io/VolRecon.
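To make the rendering step concrete, the sketch below illustrates how per-sample SRDF values along a ray can be converted into volume rendering weights and composited into color and depth. This is a minimal illustration only: it assumes a NeuS-style logistic conversion from signed distance to opacity, and the function name `render_ray`, the sharpness parameter `inv_s`, and the input shapes are placeholders rather than the exact formulation or interface used in VolRecon.

```python
import torch

def render_ray(srdf, colors, depths, inv_s=64.0):
    """Hedged sketch: convert per-sample SRDF values along one ray into
    volume rendering weights, then composite color and depth.

    srdf:   (N,) signed ray distances predicted for N samples on the ray
    colors: (N, 3) per-sample radiance
    depths: (N,) sample depths along the ray
    inv_s:  sharpness of the logistic CDF (illustrative hyper-parameter)
    """
    # Logistic CDF of the SRDF (a NeuS-style assumption, not necessarily
    # the exact conversion used in the paper).
    cdf = torch.sigmoid(srdf * inv_s)

    # Opacity of each interval from the decrease of the CDF between
    # consecutive samples; clamp to keep values in [0, 1].
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-5)).clamp(0.0, 1.0)

    # Standard alpha compositing: accumulated transmittance times opacity.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-7]), dim=0)[:-1]
    weights = alpha * trans

    color = (weights[:, None] * colors[:-1]).sum(dim=0)  # rendered color
    depth = (weights * depths[:-1]).sum(dim=0)           # rendered depth
    return color, depth
```

In this view, the surface lies near the sample where the rendering weight peaks, which is why the same weights can composite both radiance and depth.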