In the neural network literature, there is strong interest in identifying and defining activation functions that can improve neural network performance. In recent years there has been renewed interest from the scientific community in investigating activation functions that can be trained during the learning process, usually referred to as "trainable", "learnable" or "adaptable" activation functions. They appear to lead to better network performance. Diverse and heterogeneous models of trainable activation functions have been proposed in the literature. In this paper, we present a survey of these models. Starting from a discussion on the use of the term "activation function" in the literature, we propose a taxonomy of trainable activation functions, highlight common and distinctive properties of recent and past models, and discuss the main advantages and limitations of this type of approach. We show that many of the proposed approaches are equivalent to adding neuron layers which use fixed (non-trainable) activation functions and some simple local rule that constrains the corresponding weight layers.
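As a concrete illustration of this equivalence (not tied to any specific model surveyed here), consider PReLU, a well-known trainable activation function whose negative-side slope `a` is a learned parameter. The sketch below, using NumPy, shows that PReLU can be rewritten as a combination of two fixed ReLU units with one constrained weight, matching the reformulation described above; the function names are ours.

```python
import numpy as np

def relu(x):
    # Fixed (non-trainable) activation: max(0, x)
    return np.maximum(0.0, x)

def prelu(x, a):
    # Trainable activation: f(x) = x if x > 0, else a * x,
    # where the slope a is learned during training.
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 1.0, 3.0])
a = 0.25  # example value of the trainable slope

# Equivalence: PReLU(x) = ReLU(x) - a * ReLU(-x),
# i.e. two fixed ReLU units whose output weights are
# constrained to (1, -a), with a shared trainable parameter a.
y_trainable = prelu(x, a)
y_fixed_units = relu(x) - a * relu(-x)
```

Here the "simple local rule" is that the second ReLU unit's output weight is tied to the negated trainable slope, so training `a` is equivalent to training one constrained weight in a fixed-activation subnetwork.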