Neural networks that map 3D coordinates to signed distance function (SDF) or occupancy values have enabled high-fidelity implicit representations of object shape. This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF). Similar to deep SDF models, our SDDF formulation can represent whole categories of shapes and complete or interpolate across shapes from partial input data. Unlike an SDF, which measures distance to the nearest surface in any direction, an SDDF measures distance in a given direction. This allows training an SDDF model without 3D shape supervision, using only distance measurements, readily available from depth cameras or Lidar sensors. Our model also removes post-processing steps like surface extraction or rendering by directly predicting distance at arbitrary locations and viewing directions. Unlike deep view-synthesis techniques, such as Neural Radiance Fields, which train high-capacity black-box models, our model encodes by construction the property that SDDF values decrease linearly along the viewing direction. This structural constraint not only results in dimensionality reduction but also provides analytical confidence about the accuracy of SDDF predictions, regardless of the distance to the object surface.
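To make the SDF/SDDF distinction and the linear-decrease property concrete, the following is a minimal illustrative sketch (our own toy example on a sphere, not the paper's learned model; all names and the geometry are assumed). If SDDF(p, eta) denotes the signed distance from position p to the surface along unit direction eta, then stepping a distance d along eta gives SDDF(p + d*eta, eta) = SDDF(p, eta) - d, which the sketch checks numerically.

```python
# Illustrative only: closed-form SDF and SDDF for a unit sphere, used to
# contrast the two quantities and to verify the linear-decrease property.
import numpy as np

CENTER = np.zeros(3)   # assumed example geometry: sphere at the origin
RADIUS = 1.0

def sdf(p):
    """SDF: signed distance to the *nearest* surface point, direction-free."""
    return np.linalg.norm(p - CENTER) - RADIUS

def sddf(p, eta):
    """SDDF: signed distance to the surface *along* the unit direction eta.

    Returns +inf if the ray p + t*eta never meets the sphere.
    """
    eta = eta / np.linalg.norm(eta)
    q = p - CENTER
    b = q @ eta                        # half of the quadratic's linear coefficient
    disc = b * b - (q @ q - RADIUS ** 2)
    if disc < 0:
        return np.inf                  # ray misses the sphere
    return -b - np.sqrt(disc)          # nearest intersection (negative if passed)

p = np.array([0.0, 0.0, 3.0])
eta = np.array([0.0, 0.0, -1.0])       # looking toward the sphere
print(sdf(p))                          # 2.0: nearest surface in any direction
print(sddf(p, eta))                    # 2.0 along this ray; larger if eta tilts away

# Linear-decrease property: moving a distance d along eta lowers the SDDF by d.
d = 0.5
assert np.isclose(sddf(p + d * eta, eta), sddf(p, eta) - d)
```

Here the SDF and SDDF coincide only because eta points straight at the nearest surface point; tilting eta increases the SDDF while the SDF is unchanged, which is the distinction the abstract draws.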