In this paper, we introduce a novel non-linear activation function that spontaneously induces class compactness and regularization in the embedding space of neural networks. The function is dubbed DOME, for Difference Of Mirrored Exponential terms. Its basic form can replace the sigmoid or hyperbolic tangent functions as the output activation for binary classification problems. The function can also be extended to multi-class classification and used as an alternative to the standard softmax function. It can further be generalized to take more flexible shapes suitable for the intermediate layers of a network. In this version of the paper, we only introduce the concept; experimental evaluation will be added in a subsequent version.
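Since this excerpt gives only the name and intended role of DOME, the sketch below illustrates how a custom odd, bounded activation could be swapped in for sigmoid or tanh at the output of a binary classifier. The specific formula used here, dome(x) = exp(-exp(-x)) - exp(-exp(x)), is a hypothetical stand-in suggested by the name "difference of mirrored exponential terms"; it is not the authors' definition.

```python
# Sketch only: the exact DOME formula is not given in this excerpt, so the
# function below is an illustrative stand-in, not the authors' definition.
import torch
import torch.nn as nn

def dome(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical difference of two mirrored exponential terms.

    exp(-exp(-x)) and exp(-exp(x)) are reflections of each other about
    x = 0, so their difference is an odd, tanh-like squashing function
    mapping the reals onto (-1, 1).
    """
    return torch.exp(-torch.exp(-x)) - torch.exp(-torch.exp(x))

class BinaryClassifier(nn.Module):
    """Toy binary classifier with the stand-in activation at the output,
    used in place of sigmoid or tanh."""
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output lies in (-1, 1); targets would be encoded as -1/+1.
        return dome(self.head(self.backbone(x)))

# Usage example with random inputs.
model = BinaryClassifier(in_dim=10)
x = torch.randn(4, 10)
print(model(x).shape)  # torch.Size([4, 1])
```

Under this reading, training would pair the (-1, +1)-valued output with a matching loss (e.g. a squared-error or margin loss) rather than binary cross-entropy on probabilities.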