Deep neural networks (DNNs) defy the classical bias-variance trade-off: adding parameters to a DNN that already interpolates its training data typically improves its generalization performance. Explaining the mechanism behind this ``benign overfitting'' in deep networks remains an outstanding challenge. Here, we study the last-hidden-layer representations of various state-of-the-art convolutional neural networks and find that if the last hidden representation is wide enough, its neurons tend to split into groups that carry identical information and differ from each other only by statistically independent noise. The number of such groups increases linearly with the width of the layer, but only if the width is above a critical value. We show that redundant neurons appear only when the training process reaches the interpolation regime, i.e., when the training error is zero.
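To make the notion of redundant neuron groups concrete, here is a minimal sketch of one way such groups could be detected: cluster the neurons of a wide layer by the correlation of their activations across inputs. The synthetic activations, the noise level, and the clustering threshold below are illustrative assumptions for exposition, not the paper's actual data or analysis pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Hypothetical setup: simulate a wide last hidden layer whose neurons form
# redundant groups -- each group shares one underlying signal and neurons
# within a group differ only by statistically independent noise.
n_samples, n_groups, neurons_per_group = 2000, 8, 16
signals = rng.standard_normal((n_samples, n_groups))         # shared information per group
activations = np.repeat(signals, neurons_per_group, axis=1)  # duplicate the signal per group
activations += 0.3 * rng.standard_normal(activations.shape)  # independent per-neuron noise

# Redundancy probe: neurons carrying the same information are strongly
# correlated across inputs, so cluster them with 1 - |corr| as a distance.
corr = np.corrcoef(activations, rowvar=False)
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")  # threshold is a free, illustrative choice

print(f"recovered {labels.max()} redundant groups (expected {n_groups})")
```

With the chosen noise scale, within-group correlations stay near one while cross-group correlations stay near zero, so average-linkage clustering at this threshold recovers the planted groups; on real network activations the threshold would need tuning.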