Multilayer perceptrons (MLPs) learn high frequencies slowly. Recent approaches encode features in spatial bins to speed up the learning of details, but at the cost of larger model size and a loss of continuity. Instead, we propose to encode features in bins of the Fourier features that are commonly used for positional encoding. We call these Quantized Fourier Features (QFF). As a naturally multiresolution and periodic representation, QFF in our experiments yields smaller model size, faster training, and better-quality outputs for several applications, including Neural Image Representation (NIR), Neural Radiance Field (NeRF), and Signed Distance Function (SDF) modeling. QFF is easy to code, fast to compute, and serves as a simple drop-in addition to many neural field representations.
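To make the idea concrete, below is a minimal sketch of one plausible QFF-style encoder, not the authors' implementation: each sinusoidal positional-encoding channel (which lies in [-1, 1]) is quantized into bins holding learnable feature vectors, and the output is linearly interpolated between neighboring bins so the encoding stays continuous and inherits the periodicity of the underlying sinusoids. The function name `qff_encode` and the table layout `feats` are illustrative assumptions.

```python
import numpy as np

def qff_encode(x, feats, num_freqs):
    """Illustrative QFF-style encoding (assumed layout, not the paper's exact code).

    x:     (N,) scalar coordinates.
    feats: (num_freqs, 2, num_bins, dim) learnable features -- one bin table
           per (frequency, sin/cos) channel.
    Returns an (N, num_freqs * 2 * dim) feature array.
    """
    num_bins = feats.shape[2]
    out = []
    for l in range(num_freqs):
        for k, fn in enumerate((np.sin, np.cos)):
            v = fn((2.0 ** l) * np.pi * x)            # periodic channel in [-1, 1]
            t = (v + 1.0) * 0.5 * (num_bins - 1)      # continuous bin coordinate
            i0 = np.clip(np.floor(t).astype(int), 0, num_bins - 2)
            w = (t - i0)[:, None]                     # linear interpolation weight
            out.append((1.0 - w) * feats[l, k, i0] + w * feats[l, k, i0 + 1])
    return np.concatenate(out, axis=1)

# Usage: 4 frequencies, 8 bins per channel, 2-dim features per bin.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 2, 8, 2))
enc = qff_encode(rng.random(5), feats, num_freqs=4)
print(enc.shape)  # (5, 16)
```

Because the features are indexed through sin/cos rather than raw position, the encoding is multiresolution (one table per frequency) and periodic by construction, which is the property the abstract highlights over plain spatial binning.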