We present a method to automatically compute correct gradients with respect to geometric scene parameters in neural SDF renderers. Recent physically-based differentiable rendering techniques for meshes have used edge-sampling to handle discontinuities, particularly at object silhouettes, but SDFs do not have a simple parametric form amenable to sampling. Instead, our approach builds on area-sampling techniques and develops a continuous warping function for SDFs to account for these discontinuities. Our method leverages the distance to the surface encoded in an SDF and uses quadrature on sphere tracer points to compute this warping function. We further show that this can be done by subsampling the points, making the method tractable for neural SDFs. Our differentiable renderer can be used to optimize neural shapes from multi-view images, and it produces 3D reconstructions comparable to recent SDF-based inverse rendering methods, without the need for 2D segmentation masks to guide the geometry optimization and without volumetric approximations to the geometry.
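To make the described pipeline concrete, the following is a minimal JAX sketch of the general idea: sphere trace a ray through an SDF, reuse the (subsampled) trace points as quadrature nodes, and combine per-point level-set velocities with distance-based weights into a per-ray warp. The analytic sphere SDF, the inverse-distance weight function, the every-4th-point subsampling, and all names here are illustrative assumptions, not the paper's actual implementation.

```python
import jax
import jax.numpy as jnp

def sdf_sphere(x, radius):
    # Signed distance to a sphere of the given radius centered at the origin
    # (stand-in for a neural SDF in this sketch).
    return jnp.linalg.norm(x) - radius

def sphere_trace(origin, direction, radius, n_steps=32):
    # Plain sphere tracing: advance along the ray by the SDF value and keep
    # every intermediate point so it can be reused as a quadrature node.
    def step(t, _):
        p = origin + t * direction
        return t + sdf_sphere(p, radius), p
    _, points = jax.lax.scan(step, jnp.array(0.0), None, length=n_steps)
    return points  # (n_steps, 3)

def point_velocity(x, radius):
    # Velocity of the SDF level set at x as the radius parameter changes,
    # via the implicit function theorem:
    # dx/dr = -(df/dr) * grad_x f / |grad_x f|^2.
    g = jax.grad(sdf_sphere, argnums=0)(x, radius)
    dfdr = jax.grad(sdf_sphere, argnums=1)(x, radius)
    return -dfdr * g / jnp.dot(g, g)

def ray_warp(origin, direction, radius):
    # Quadrature over subsampled sphere-tracer points: distance-based weights
    # concentrate the estimate on points near the surface, so near silhouettes
    # the warp approaches the velocity of the occluding boundary.
    points = sphere_trace(origin, direction, radius)[::4]  # assumed 4x subsampling
    f = jax.vmap(sdf_sphere, in_axes=(0, None))(points, radius)
    w = 1.0 / (jnp.abs(f) + 1e-2)   # assumed weight: inverse distance to surface
    w = w / jnp.sum(w)
    v = jax.vmap(point_velocity, in_axes=(0, None))(points, radius)
    return jnp.sum(w[:, None] * v, axis=0)

# A grazing ray near the sphere's silhouette; the warp reports how that
# visibility boundary moves as the radius parameter changes.
origin = jnp.array([0.0, 1.01, -3.0])
direction = jnp.array([0.0, 0.0, 1.0])
print(ray_warp(origin, direction, 1.0))
```

Because everything above is expressed with `jax` primitives, the warp itself remains differentiable with respect to the shape parameter, which is the property the abstract relies on when optimizing neural shapes from multi-view images.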