We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with $r$ neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most $\mathcal{O}(r)$ linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
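To make the notion of linear regions concrete, here is a minimal illustrative sketch (not code from the paper) of how one could count the linear regions of a univariate one-hidden-layer ReLU network $f(x) = \sum_i v_i\,\mathrm{ReLU}(w_i x + b_i)$. Each neuron with $w_i \neq 0$ induces a kink at $x = -b_i/w_i$, and crossing it changes the slope by $v_i |w_i|$; regions are separated by kinks at which the net slope change is nonzero. The function name `count_linear_regions` is a hypothetical helper chosen for this example.

```python
import numpy as np

def count_linear_regions(w, b, v, tol=1e-9):
    """Count linear regions of f(x) = sum_i v_i * relu(w_i * x + b_i).

    A neuron with w_i != 0 has its kink at x = -b_i / w_i; moving left to
    right past that kink, the slope of f jumps by v_i * |w_i|.  The number
    of linear regions is one plus the number of distinct kink locations
    whose total slope jump is nonzero (co-located kinks may cancel).
    """
    w = np.asarray(w, dtype=float)
    b = np.asarray(b, dtype=float)
    v = np.asarray(v, dtype=float)
    active = np.abs(w) > tol          # neurons that actually bend the line
    kinks = -b[active] / w[active]    # kink location per active neuron
    jumps = v[active] * np.abs(w[active])
    order = np.argsort(kinks)
    kinks, jumps = kinks[order], jumps[order]
    regions, i, n = 1, 0, len(kinks)
    while i < n:
        # Group kinks that coincide (within tolerance) and sum their jumps.
        j, total = i, 0.0
        while j < n and kinks[j] - kinks[i] <= tol:
            total += jumps[j]
            j += 1
        if abs(total) > tol:
            regions += 1
        i = j
    return regions

# f(x) = relu(x) + relu(-x + 1): slopes -1, 0, 1 -> three linear regions.
print(count_linear_regions([1.0, -1.0], [0.0, 1.0], [1.0, 1.0]))   # -> 3
# Two neurons that cancel exactly: f is identically zero, one region.
print(count_linear_regions([1.0, 1.0], [0.0, 0.0], [1.0, -1.0]))   # -> 1
```

A width-$k$ network of this form has at most $k + 1$ linear regions; the abstract's point is that GF converges to solutions using only $\mathcal{O}(r)$ of them, far fewer than the width would allow.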