We study the implicit regularization effects of deep learning in tensor factorization. While implicit regularization in deep matrix and 'shallow' tensor factorization via linear and certain types of non-linear neural networks promotes low-rank solutions with at most quadratic growth, we show that its effect in deep tensor factorization grows polynomially with the depth of the network. This provides a remarkably faithful description of the observed experimental behaviour. Using numerical experiments, we demonstrate the benefits of this implicit regularization in yielding more accurate estimation and better convergence properties.
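As a toy illustration of the implicit low-rank bias discussed above, the sketch below runs gradient descent on a depth-3 factorization to recover a rank-1 target from partial observations. This is a minimal matrix analogue written for brevity, not the paper's tensor setting or code; all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth, steps, lr = 6, 3, 5000, 0.01  # assumed toy hyperparameters

# Ground truth: a rank-1 matrix observed on a random subset of entries.
u, v = rng.standard_normal(n), rng.standard_normal(n)
target = np.outer(u, v)
mask = rng.random((n, n)) < 0.5  # observed entries

# Deep parameterization: X = W[depth-1] @ ... @ W[0], small initialization.
W = [0.1 * rng.standard_normal((n, n)) for _ in range(depth)]

def product(ws):
    """Multiply the factor list into a single matrix, ws[-1] @ ... @ ws[0]."""
    X = ws[0]
    for w in ws[1:]:
        X = w @ X
    return X

def loss(X):
    """Squared error on the observed entries only."""
    return 0.5 * np.sum((mask * (X - target)) ** 2)

initial_loss = loss(product(W))
for _ in range(steps):
    X = product(W)
    G = mask * (X - target)  # gradient of the loss with respect to X
    for i in range(depth):
        # X = left @ W[i] @ right, so dLoss/dW[i] = left.T @ G @ right.T
        left = product(W[i + 1:]) if i + 1 < depth else np.eye(n)
        right = product(W[:i]) if i > 0 else np.eye(n)
        W[i] = W[i] - lr * left.T @ G @ right.T

final_loss = loss(product(W))
s = np.linalg.svd(product(W), compute_uv=False)
print("loss:", round(initial_loss, 3), "->", round(final_loss, 3))
print("leading singular values:", np.round(s[:3], 3))
```

In runs of this kind, the spectrum of the recovered matrix tends to concentrate on a single leading singular value, mirroring the low-rank bias that the abstract describes; the paper's setting replaces the matrix product with a deep tensor factorization.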