In this paper, we introduce a novel non-linear activation function that spontaneously induces class compactness and regularization in the embedding space of neural networks. We dub the function DOME, for Difference Of Mirrored Exponential terms. In its basic form, the function can replace the sigmoid or hyperbolic tangent as an output activation for binary classification problems. It can also be extended to multi-class classification as an alternative to the standard softmax function, and further generalized to take more flexible shapes suitable for the intermediate layers of a network. We empirically demonstrate the properties of the function and show that models using it exhibit increased robustness against adversarial attacks.