We show that the representation cost of fully connected neural networks with homogeneous nonlinearities - which describes the implicit bias in function space of networks trained with $L_2$-regularization or with losses such as the cross-entropy - converges, as the depth of the network goes to infinity, to a notion of rank over nonlinear functions. We then ask under which conditions the global minima of the loss recover the `true' rank of the data: we show that for depths that are too large the global minimum will be approximately rank 1 (underestimating the rank); we then argue that there is a range of depths, which grows with the number of datapoints, over which the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries, and we show that autoencoders with optimal nonlinear rank are naturally denoising.
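For concreteness, a minimal sketch of the convergence statement, assuming the usual $L_2$-parameter-norm formalization of the representation cost (the symbols $R_L$, $\theta$, $f_\theta$, $\Omega$, and $\operatorname{Rank}$ are notation introduced here, not taken from the abstract):
\[
  R_L(f) \;=\; \min\bigl\{ \|\theta\|_2^2 \;:\; f_\theta = f \text{ on } \Omega \bigr\},
  \qquad
  \lim_{L \to \infty} \frac{R_L(f)}{L} \;=\; \operatorname{Rank}(f),
\]
where $f_\theta$ ranges over depth-$L$ fully connected networks with homogeneous nonlinearities, $\Omega$ is the input domain, and $\operatorname{Rank}(f)$ denotes the notion of rank over nonlinear functions referred to above; note that it is the depth-normalized cost $R_L(f)/L$ that converges.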