Owing to their remarkable success, deep networks are gradually penetrating almost every domain of our lives. However, substantial accuracy improvements come at the price of \emph{irreproducibility}. Two identical models, trained on the exact same training dataset, may exhibit large differences in predictions on individual examples even when their average accuracy is similar, especially when trained on highly parallelized distributed systems. The popular Rectified Linear Unit (ReLU) activation has been key to the recent success of deep networks. We demonstrate, however, that ReLU is also a catalyst for irreproducibility in deep networks. We show that activations smoother than ReLU can not only provide better accuracy, but also better accuracy-reproducibility tradeoffs. We propose a new family of activations, Smooth ReLU (\emph{SmeLU}), designed to give such better tradeoffs while keeping the mathematical expression simple and the implementation cheap. SmeLU is monotonic and mimics ReLU while providing continuous gradients, yielding better reproducibility. We generalize SmeLU to give even more flexibility, and then demonstrate that SmeLU and its generalized form are special cases of a more general methodology of REctified Smooth Continuous Unit (RESCU) activations. Empirical results demonstrate the superior accuracy-reproducibility tradeoffs of smooth activations, SmeLU in particular.
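For concreteness, one way to realize an activation with the properties described above (ReLU-like, monotonic, with a continuous gradient and a simple closed form) is a piecewise-quadratic transition between the two linear pieces of ReLU; the sketch below uses an assumed half-width hyperparameter $\beta$ that is not fixed by this abstract:
\[
\mathrm{SmeLU}(x) =
\begin{cases}
0, & x \le -\beta, \\[2pt]
\dfrac{(x+\beta)^2}{4\beta}, & -\beta \le x \le \beta, \\[4pt]
x, & x \ge \beta.
\end{cases}
\]
The quadratic middle piece meets the outer linear pieces with matching values and first derivatives at $x=\pm\beta$, so the gradient is continuous everywhere, in contrast to ReLU's gradient jump at the origin.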