Normal estimation for unstructured point clouds is an important task in 3D computer vision. Current methods achieve encouraging results by mapping local patches to normal vectors or by learning local surface fitting with neural networks. However, these methods do not generalize well to unseen scenarios and are sensitive to parameter settings. To resolve these issues, we propose an implicit function that learns an angle field around the normal of each point in the spherical coordinate system, dubbed Neural Angle Fields (NeAF). Instead of directly predicting the normal of an input point, we predict the angle offset between the ground-truth normal and a randomly sampled query normal. This strategy pushes the network to observe more diverse samples, which leads to higher prediction accuracy in a more robust manner. To predict normals from the learned angle fields at inference time, we randomly sample query vectors in a unit spherical space and take the vectors with minimal angle offsets as the predicted normals. To further leverage the prior learned by NeAF, we refine the predicted normal vectors by minimizing their angle offsets. Experiments on synthetic data and real scans show significant improvements over the state-of-the-art on widely used benchmarks.
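The sampling-based inference step can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `offset_fn` stands in for the trained NeAF network that predicts an angle offset per query direction, and the oracle `angle_offset` (which assumes the ground-truth normal is known) is used here only to keep the example self-contained.

```python
import numpy as np

def sample_unit_vectors(n, rng):
    # Sample n directions uniformly on the unit sphere
    # (normalize Gaussian samples).
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def angle_offset(queries, normal):
    # Oracle stand-in for the network: the angle between each query
    # vector and an (unoriented) normal. abs() handles the sign
    # ambiguity of unoriented normals.
    cos = np.clip(queries @ normal, -1.0, 1.0)
    return np.arccos(np.abs(cos))

def predict_normal(offset_fn, n_queries=2048, seed=0):
    # NeAF-style inference: sample query vectors in the unit sphere,
    # evaluate their predicted angle offsets, and keep the query with
    # the smallest offset as the predicted normal.
    rng = np.random.default_rng(seed)
    queries = sample_unit_vectors(n_queries, rng)
    offsets = offset_fn(queries)
    return queries[np.argmin(offsets)]

# Hypothetical usage with a known ground-truth normal as the oracle.
gt = np.array([0.3, -0.5, 0.8])
gt /= np.linalg.norm(gt)
pred = predict_normal(lambda q: angle_offset(q, gt))
```

With a few thousand queries the best sample already lies within a few degrees of the true normal; the abstract's refinement step would then further minimize the predicted offset starting from this coarse estimate.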