Why do deep neural networks (DNNs) benefit from very high-dimensional parameter spaces? The contrast between their huge parameter complexity and their stunning performance in practice is all the more intriguing because it cannot be explained by the standard theory of regular statistical models. In this work, we propose a geometrically flavored information-theoretic approach to study this phenomenon. Namely, we introduce the locally varying dimensionality of the parameter space of neural network models by considering the number of significant dimensions of the Fisher information matrix, and model the parameter space as a manifold using the framework of singular semi-Riemannian geometry. We derive model complexity measures which yield short description lengths for deep neural network models based on their singularity analysis, thus explaining the good performance of DNNs despite their large number of parameters.
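To make the notion of "significant dimensions of the Fisher information matrix" concrete, the following is a minimal numerical sketch, not the paper's actual construction: it computes the empirical Fisher information matrix of a toy over-parameterized logistic-regression model and counts the eigenvalues above a small relative cutoff as the local dimensionality. The model, the data, and the cutoff value are all illustrative assumptions.

```python
# Hedged sketch: the paper's definition may differ. This only illustrates
# "local dimensionality" as the number of significant eigenvalues of an
# (empirical) Fisher information matrix, for a toy logistic-regression model.
import numpy as np

rng = np.random.default_rng(0)

# Toy data and an over-parameterized linear model p(y=1|x) = sigmoid(w.x);
# only the first few input directions carry signal, so most parameter
# directions are nearly irrelevant (assumed setup, for illustration only).
n, d = 200, 50
X = rng.normal(size=(n, d))
X[:, 5:] *= 0.01
w = rng.normal(size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-example score vectors s_i = grad_w log p(y_i | x_i, w), with labels
# drawn from the model so that E[s s^T] equals the Fisher information.
p = sigmoid(X @ w)
y = rng.binomial(1, p)
scores = (y - p)[:, None] * X   # gradient of the Bernoulli log-likelihood

# Empirical Fisher information matrix and its spectrum (descending order)
F = scores.T @ scores / n
eigvals = np.linalg.eigvalsh(F)[::-1]

# "Significant" dimensions: eigenvalues above a small fraction of the largest
# one. The cutoff is a modelling choice, not fixed by the abstract.
cutoff = 1e-3 * eigvals[0]
local_dim = int(np.sum(eigvals > cutoff))

print(f"ambient parameter dimension: {d}")
print(f"locally significant dimensions of the FIM: {local_dim}")
```

Running this sketch typically reports a local dimensionality far below the ambient parameter dimension, which is the qualitative picture the abstract appeals to: near a trained parameter value, many directions contribute negligibly to the Fisher information and hence to the description length.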