We revisit NPBG, the popular approach to novel view synthesis that introduced the ubiquitous point feature neural rendering paradigm. We are particularly interested in data-efficient learning with fast view synthesis. We achieve this through a denser, view-dependent, mesh-based rasterization of point descriptors, combined with a foreground/background scene rendering split and an improved loss. Trained solely on a single scene, our method outperforms NPBG, which was pretrained on ScanNet and then finetuned on the scene. It also performs competitively with the state-of-the-art method SVS, which was trained on the full datasets (DTU and Tanks and Temples) and then finetuned on the scene, despite SVS using a deeper neural renderer.