Batch normalization is widely used in deep learning to normalize intermediate activations. Deep networks are notoriously difficult to train, demanding careful weight initialization, lower learning rates, and so on. Batch Normalization (\textbf{BN}) addresses these issues by normalizing the inputs to activation functions to zero mean and unit standard deviation. Making this normalization part of training dramatically accelerates the training of very deep networks. A growing body of research examines the theoretical explanation behind the success of \textbf{BN}. Most of these theoretical insights attribute the benefits of \textbf{BN} to its influence on optimization, weight-scale invariance, and regularization. Despite the undeniable success of \textbf{BN} in accelerating generalization, an analytical relation between the effect of \textbf{BN} and the regularization parameter is still missing. This paper brings out the data-dependent auto-tuning of the regularization parameter by \textbf{BN}, with analytical proofs. We pose \textbf{BN} as a constrained optimization imposed on non-\textbf{BN} weights, through which we demonstrate its data-statistics-dependent auto-tuning of the regularization parameter. We also give an analytical proof of its behavior under a noisy input scenario, which reveals the signal-versus-noise tuning of the regularization parameter. We substantiate our claims with empirical results from experiments on the MNIST dataset.
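For reference, the normalization mentioned above follows the standard \textbf{BN} transform; the symbols $\gamma$, $\beta$, and $\epsilon$ below are the usual BN scale, shift, and stability parameters, not notation introduced in this paper:
\[
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta,
\]
where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}^2$ are the mini-batch mean and variance of the pre-activation $x_i$, $\epsilon$ is a small constant for numerical stability, and $\gamma$, $\beta$ are learned parameters.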