While considerable progress has been made in recent years, the dynamics of learning in deep nonlinear neural networks remain to this day poorly understood. In this work, we study the case of binary classification and prove various properties of learning in such networks under strong assumptions such as linear separability of the data. Extending existing results from the linear case, we confirm empirical observations by proving that the classification error also follows a sigmoidal shape in nonlinear architectures. We show that, given proper initialization, learning proceeds along parallel independent modes and that certain regions of parameter space might lead to failed training. We also demonstrate that input norm and feature frequency in the dataset lead to distinct convergence speeds, which might shed some light on the generalization capabilities of deep neural networks. We provide a comparison between the dynamics of learning with cross-entropy and hinge losses, which could prove useful for understanding recent progress in the training of generative adversarial networks. Finally, we identify a phenomenon that we term gradient starvation, where the most frequent features in a dataset prevent the learning of other, less frequent but equally informative features.