We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
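For intuition, below is a minimal PyTorch-style sketch of the kind of regularizer the abstract alludes to: a squared-L2 penalty on the network's logits (the form known as Spectral Decoupling in the published paper). The penalty coefficient name `lambda_sd` and the surrounding code are illustrative assumptions, not a reproduction of the authors' implementation.

```python
import torch
import torch.nn.functional as F

def spectral_decoupling_loss(logits: torch.Tensor,
                             targets: torch.Tensor,
                             lambda_sd: float = 0.01) -> torch.Tensor:
    """Cross-entropy plus an L2 penalty on the logits.

    Penalizing the logits (rather than the weights) is meant to
    decouple the learning dynamics of individual features, so that
    a dominant feature does not starve weaker but still-predictive
    features of gradient signal.
    """
    ce = F.cross_entropy(logits, targets)
    penalty = 0.5 * lambda_sd * (logits ** 2).sum(dim=1).mean()
    return ce + penalty

# Illustrative usage inside a training step:
# logits = model(inputs)
# loss = spectral_decoupling_loss(logits, labels, lambda_sd=0.01)
# loss.backward()
```

Unlike standard weight decay, this sketch regularizes the model's outputs directly; in the paper's analysis this is what alters the coupled feature-learning dynamics that give rise to gradient starvation.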