We show that the representation cost of fully connected neural networks with homogeneous nonlinearities - which describes the implicit bias in function space of networks with $L_2$-regularization or with losses such as the cross-entropy - converges, as the depth of the network goes to infinity, to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the `true' rank of the data: we show that for depths that are too large the global minimum is approximately rank 1 (underestimating the rank); we then argue that there is a range of depths, growing with the number of datapoints, in which the true rank is recovered. Finally, we discuss the effect of the rank of a classifier on the topology of the resulting class boundaries and show that autoencoders with optimal nonlinear rank are naturally denoising.
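For concreteness, the representation cost mentioned above is usually taken to be the smallest squared parameter norm needed to realize a given function; the sketch below uses this standard formulation, with the symbols $R$, $\theta$, $f_\theta$, $L$, and $\Omega$ introduced here for illustration rather than taken from the abstract itself:
\[
R(f;\Omega,L) \;=\; \min_{\theta \,:\, f_\theta \equiv f \text{ on } \Omega} \|\theta\|_2^2,
\qquad
\frac{R(f;\Omega,L)}{L} \;\xrightarrow[L\to\infty]{}\; \operatorname{Rank}(f),
\]
where $f_\theta$ denotes a depth-$L$ fully connected network with homogeneous nonlinearity and parameters $\theta$, restricted to an input domain $\Omega$, and $\operatorname{Rank}(f)$ is the limiting notion of rank over nonlinear functions discussed in the text.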