We present DeepSurfels, a novel hybrid scene representation for geometry and appearance information. DeepSurfels combines explicit and neural building blocks to jointly encode geometry and appearance. In contrast to established representations, DeepSurfels better represents high-frequency textures, is well-suited for online updates of appearance information, and can be easily combined with machine learning methods. We further present an end-to-end trainable online appearance fusion pipeline that fuses information from RGB images into the proposed scene representation and is trained using self-supervision imposed by the reprojection error with respect to the input images. Our method compares favorably to classical texture mapping approaches as well as recent learning-based techniques. Moreover, we demonstrate lower runtime, improved generalization capabilities, and better scalability to larger scenes compared to existing methods.