Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals. However, training them to capture fine details in multi-scale signals is difficult and computationally expensive. Here we propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers in coordinate-based multi-layer perceptrons (MLPs) that significantly accelerates and improves their training. We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate. This not only helps with mitigating spectral bias, but also allows networks to quickly recover from poor initializations and reach better local minima. We demonstrate how random weight factorization can be leveraged to improve the training of neural representations on a variety of tasks, including image regression, shape representation, computed tomography, inverse rendering, solving partial differential equations, and learning operators between function spaces.
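Below is a minimal sketch (not the authors' reference implementation) of how random weight factorization can serve as a drop-in replacement for a conventional linear layer in a coordinate-based MLP, following the abstract's description: each weight matrix is re-parameterized as a trainable per-neuron scale times a direction matrix, so that every neuron effectively carries its own self-adaptive learning rate. The class name `FactorizedLinear` and the initialization hyperparameters `mu` and `sigma` are illustrative assumptions.

```python
# Hedged sketch of random weight factorization for a linear layer: W = diag(s) @ V,
# with a randomly sampled per-neuron scale s. Not the authors' reference code.
import torch
import torch.nn as nn


class FactorizedLinear(nn.Module):
    """Drop-in replacement for nn.Linear using the factorization W = diag(s) @ V."""

    def __init__(self, in_features, out_features, mu=1.0, sigma=0.1):
        super().__init__()
        # Start from a conventionally initialized dense layer.
        w = nn.Linear(in_features, out_features).weight.data  # shape (out_features, in_features)
        # Sample a random per-neuron scale s = exp(g), g ~ N(mu, sigma^2)  (assumed scheme).
        g = mu + sigma * torch.randn(out_features)
        s = torch.exp(g)
        # Store the factorization: trainable scale s and direction matrix V, with W = diag(s) V.
        self.s = nn.Parameter(s)
        self.v = nn.Parameter(w / s[:, None])
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Recompose the effective weight matrix at every forward pass.
        weight = self.s[:, None] * self.v
        return torch.nn.functional.linear(x, weight, self.bias)


# Usage: swap FactorizedLinear for nn.Linear inside a coordinate-based MLP,
# e.g. an image-regression network mapping 2D coordinates to intensities.
mlp = nn.Sequential(
    FactorizedLinear(2, 128), nn.Tanh(),
    FactorizedLinear(128, 128), nn.Tanh(),
    FactorizedLinear(128, 1),
)
coords = torch.rand(1024, 2)   # 2D pixel coordinates in [0, 1]^2
pred = mlp(coords)             # predicted signal values at those coordinates
```

Because the scales `s` are trained jointly with the directions `v`, gradient updates to each neuron's effective weights are rescaled by that neuron's own factor, which is the self-adaptive learning-rate effect described above.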