Batch normalization is currently the most widely used form of internal normalization for deep neural networks. Further work has shown that normalizing the weights, applying additional conditioning, and normalizing the gradients improve generalization even further. In this work, we combine several of these methods and thereby increase the generalization of the resulting networks. The advantage of these newer methods over batch normalization is not only improved generalization: they also need to be applied only during training and therefore add no overhead at inference time. Link to CUDA code: https://atreus.informatik.uni-tuebingen.de/seafile/d/8e2ab8c3fdd444e1a135/
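To make the distinction from batch normalization concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, which is provided as CUDA code at the link above) of how weight normalization and gradient normalization can be applied purely inside the training step, so that the deployed network contains no extra normalization layers:

```python
# Minimal sketch: weight and gradient normalization as training-time-only
# operations. All names and the toy SGD loop are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = x @ W, trained with plain SGD on a squared-error loss.
W = rng.normal(scale=0.1, size=(8, 4))

def weight_normalize(W, eps=1e-8):
    """Rescale each output column of W to unit L2 norm
    (weight normalization without a learned gain, for illustration)."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W / (norms + eps)

def gradient_normalize(grad, eps=1e-8):
    """Normalize the gradient to unit L2 norm before the update."""
    return grad / (np.linalg.norm(grad) + eps)

def sgd_step(W, x, y_target, lr=0.1):
    # Forward pass and mean squared-error gradient for one batch.
    y = x @ W
    grad = x.T @ (y - y_target) / x.shape[0]
    # Both normalizations happen only here, inside the training step.
    W = W - lr * gradient_normalize(grad)
    return weight_normalize(W)

# At inference time W is used as-is: no normalization layer remains in the
# network, so there is no additional runtime cost during use.
x = rng.normal(size=(16, 8))
y_target = rng.normal(size=(16, 4))
for _ in range(100):
    W = sgd_step(W, x, y_target)
```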