Understanding the dynamics of feature learning in neural networks (NNs) remains a significant challenge. Mousavi-Hosseini et al. (2023) analyze a multi-index teacher-student setting and show that a two-layer student develops a low-rank structure in its first-layer weights when trained with stochastic gradient descent (SGD) and a strong regularizer. This structural property is known to reduce the sample complexity of generalization. Indeed, in a second step, the same authors establish algorithm-specific learning guarantees under additional assumptions. In this paper, we focus exclusively on the structure-discovery aspect and study it under weaker assumptions; specifically, we allow (a) NNs of arbitrary size and depth, (b) with all parameters trainable, (c) trained under any smooth loss function, (d) with tiny regularization, and (e) by any method that attains a second-order stationary point (SOSP), e.g.\ perturbed gradient descent (PGD). At the core of our approach is a key $\textit{derandomization}$ lemma, which states that, under mild conditions, optimizing the function $\mathbb{E}_{\mathbf{x}} \left[g_{\theta}(\mathbf{W}\mathbf{x} + \mathbf{b})\right]$ converges to a point where $\mathbf{W} = \mathbf{0}$. The fundamental nature of this lemma directly explains structure discovery and has immediate applications in other domains, including an end-to-end approximation for MAXCUT and the computation of Johnson-Lindenstrauss embeddings.
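To make the lemma's claim concrete, the following is a minimal numerical sketch, not the paper's construction: we pick a hypothetical $g(z) = -\cos(z)$, draw $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_d)$, and run perturbed gradient descent on a Monte Carlo estimate of $\mathbb{E}_{\mathbf{x}}[g(\mathbf{w}^\top\mathbf{x} + b)]$ with a tiny $\ell_2$ penalty on $\mathbf{w}$. For this choice the population objective is $-\cos(b)\,e^{-\|\mathbf{w}\|^2/2} + \lambda\|\mathbf{w}\|^2$, whose second-order stationary points (for small $\lambda$) have $\mathbf{w} = \mathbf{0}$, so the iterates drive $\|\mathbf{w}\|$ to (near) zero, in line with the lemma's qualitative statement.

```python
import numpy as np

# Illustrative sketch (hypothetical g, not the paper's setting):
# minimize a Monte Carlo estimate of  E_x[ -cos(w·x + b) ] + lam * ||w||^2
# with perturbed gradient descent; the iterates drive w toward 0.

rng = np.random.default_rng(0)
d, n_samples = 10, 2048
lam, lr, noise_scale, steps = 1e-4, 0.1, 1e-3, 2000

w = rng.normal(scale=0.5, size=d)   # trainable "first-layer" weights
b = rng.normal(scale=0.5)           # trainable bias

for t in range(steps):
    x = rng.normal(size=(n_samples, d))   # fresh batch of Gaussian inputs
    z = x @ w + b                         # pre-activations w·x + b
    # gradients of the Monte Carlo loss  mean(-cos(z)) + lam * ||w||^2
    s = np.sin(z)
    grad_w = (s[:, None] * x).mean(axis=0) + 2.0 * lam * w
    grad_b = s.mean()
    # perturbed gradient descent: add tiny isotropic noise to escape saddles
    w -= lr * grad_w + noise_scale * rng.normal(size=d)
    b -= lr * grad_b + noise_scale * rng.normal()

print("||w|| after training:", np.linalg.norm(w))   # ~1e-2, i.e. w ≈ 0 up to the noise floor
print("b after training:", b)                        # ≈ 0 (mod 2π)
```

The choice of $g$ here is purely for having a closed-form population objective to check against; the lemma itself is stated for general smooth $g_{\theta}$ under its mild conditions.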