Activation functions play critical roles in neural networks, yet the design of current off-the-shelf neural networks pays little attention to the specific choice of activation function. Here we show that data-aware customization of activation functions can result in striking reductions in neural network error. We first give a simple linear-algebraic explanation of the role of activation functions in neural networks; then, through a connection with the Diaconis-Shahshahani Approximation Theorem, we propose a set of criteria for good activation functions. As a case study, we consider regression tasks with a partially exchangeable target function, \emph{i.e.}, $f(u,v,w)=f(v,u,w)$ for $u,v\in \mathbb{R}^d$ and $w\in \mathbb{R}^k$, and prove that for such a target function, using an even activation function in at least one of the layers guarantees that the prediction preserves partial exchangeability, as required for best performance. Since even activation functions are seldom used in practice, we designed the ``seagull'' even activation function $\log(1+x^2)$ according to our criteria. Empirical testing on over two dozen 9- to 25-dimensional examples with different local smoothness, curvature, and degree of exchangeability revealed that a simple substitution of the ``seagull'' activation function into an already-refined neural network can lead to an order-of-magnitude reduction in error. This improvement was most pronounced when the substitution was applied to the layer in which the exchangeable variables are connected for the first time. While the improvement is greatest for low-dimensional data, experiments on the CIFAR10 image classification dataset showed that use of the ``seagull'' activation can reduce error even in high-dimensional cases. These results collectively highlight the potential of customizing activation functions as a general approach to improving neural network performance.
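To make the drop-in substitution concrete, the following is a minimal PyTorch-style sketch of the ``seagull'' activation $\log(1+x^2)$ placed in one layer of a small regression network. The layer sizes, depth, and choice of the remaining activations are illustrative assumptions for this sketch, not the architectures used in the experiments.

\begin{verbatim}
import torch
import torch.nn as nn

class Seagull(nn.Module):
    """Even 'seagull' activation: log(1 + x^2)."""
    def forward(self, x):
        return torch.log1p(x.pow(2))

# Hypothetical 9-dimensional regression network; only the first hidden
# layer (where exchangeable inputs are first mixed) uses the seagull
# activation, the rest of the network is left unchanged.
model = nn.Sequential(
    nn.Linear(9, 64),
    Seagull(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, 1),
)
\end{verbatim}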