Coordinate-MLPs are emerging as an effective tool for modeling multidimensional continuous signals, overcoming many drawbacks associated with discrete grid-based approximations. However, coordinate-MLPs with ReLU activations, in their rudimentary form, demonstrate poor performance in representing signals with high fidelity, prompting the need for positional embedding layers. Recently, Sitzmann et al. proposed a sinusoidal activation function that allows coordinate-MLPs to omit positional embeddings while still preserving high signal fidelity. Despite this potential, ReLUs still dominate the space of coordinate-MLPs; we speculate that this is due to the hypersensitivity of networks that employ such sinusoidal activations to the choice of initialization scheme. In this paper, we attempt to broaden the current understanding of the effect of activations in coordinate-MLPs, and show that there exists a broader class of activations suitable for encoding signals. We affirm that sinusoidal activations are only a single example in this class, and propose several non-periodic functions that empirically demonstrate more robust performance against random initializations than sinusoids. Finally, we advocate for a shift towards coordinate-MLPs that employ these non-traditional activation functions, owing to their high performance and simplicity.
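To make the setting concrete, the following is a minimal NumPy sketch of a coordinate-MLP that maps raw coordinates to a signal value without any positional embedding layer. The Gaussian activation used here is one illustrative non-periodic choice; the specific function, the bandwidth `sigma`, and the network sizes are assumptions for illustration, not values taken from the abstract.

```python
import numpy as np

def gaussian_act(x, sigma=0.1):
    # A non-periodic activation (a Gaussian bump), used here as one
    # hypothetical alternative to sin(x); sigma is an illustrative
    # bandwidth hyperparameter.
    return np.exp(-x**2 / (2.0 * sigma**2))

def coordinate_mlp(coords, weights, biases, act=gaussian_act):
    """Evaluate a small coordinate-MLP: raw (x, y) coordinates in,
    signal value out, with no positional embedding layer."""
    h = coords
    for W, b in zip(weights[:-1], biases[:-1]):
        h = act(h @ W + b)
    # Linear output layer (no activation).
    return h @ weights[-1] + biases[-1]

# Tiny randomly initialized network mapping 2-D coordinates to a scalar.
rng = np.random.default_rng(0)
sizes = [2, 32, 32, 1]
weights = [rng.normal(0.0, 1.0, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(0.0, 1.0, n) for n in sizes[1:]]

coords = np.array([[0.25, -0.5], [0.0, 0.0]])
out = coordinate_mlp(coords, weights, biases)
```

In practice such a network would be trained (e.g. by gradient descent on a pixel-reconstruction loss); the sketch only shows the forward pass and the drop-in role the activation function plays.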