We propose a new high-performance activation function, the Moderate Adaptive Linear Unit (MoLU), for deep neural networks. MoLU is a simple, elegant, and powerful activation function that can serve as a strong general-purpose choice among the hundreds of activation functions proposed to date. Because MoLU is composed of elementary functions, it is not only an infinite diffeomorphism (i.e., smooth and infinitely differentiable over its whole domain), but it also reduces training time.
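As a minimal sketch of how such an elementary-function activation can be dropped into a deep network, the PyTorch snippet below defines a smooth activation module and uses it in a small feed-forward model. The abstract does not state MoLU's exact closed form, so the tanh-based expression in the code is a hypothetical placeholder chosen only to illustrate the smooth, elementary-function construction described above; it is not claimed to be the authors' definition.

```python
import torch
import torch.nn as nn

class MoLU(nn.Module):
    """Illustrative MoLU-style activation built from elementary functions.

    NOTE: the exact functional form is not given in the abstract; the
    expression x * (1 + tanh(x)) / 2 below is a placeholder used only to
    demonstrate a smooth, infinitely differentiable construction.
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * (1.0 + torch.tanh(x)) / 2.0

# Usage: plug the activation into an ordinary feed-forward network.
model = nn.Sequential(
    nn.Linear(16, 32),
    MoLU(),
    nn.Linear(32, 1),
)
x = torch.randn(4, 16)
y = model(x)  # forward pass; gradients flow through the smooth activation
```

Because the placeholder is built entirely from elementary operations (multiplication and tanh), it is smooth everywhere and cheap to evaluate, which mirrors the properties the abstract attributes to MoLU.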