The success of neural networks (NNs) in a wide range of applications has led to increased interest in understanding the underlying learning dynamics of these models. In this paper, we go beyond mere descriptions of the learning dynamics by taking a graph perspective and investigating the relationship between the graph structure of NNs and their performance. Specifically, we propose (1) representing the neural network learning process as a time-evolving graph (i.e., a series of static graph snapshots over epochs), (2) capturing the structural changes of the NN during the training phase in a simple temporal summary, and (3) leveraging the structural summary to predict the accuracy of the underlying NN in a classification or regression task. For the dynamic graph representation of NNs, we explore structural representations for fully-connected and convolutional layers, which are key components of powerful NN models. Our analysis shows that a simple summary of graph statistics, such as weighted degree and eigenvector centrality, over just a few epochs can be used to accurately predict the performance of NNs. For example, a weighted degree-based summary of the time-evolving graph that is constructed based on 5 training epochs of the LeNet architecture achieves classification accuracy of over 93%. Our findings are consistent for different NN architectures, including LeNet, VGG, AlexNet and ResNet.
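To make the graph construction concrete, here is a minimal sketch of the idea for a single fully-connected layer: the layer's weight matrix is viewed as a weighted bipartite graph, and per-snapshot statistics such as weighted degree and eigenvector centrality are computed on it. All names and the use of `networkx` are illustrative assumptions, not the paper's actual implementation; a time-evolving summary would simply collect these statistics at every training epoch.

```python
import numpy as np
import networkx as nx

# Hypothetical weight matrix of a small fully-connected layer
# (rows = input neurons, columns = output neurons) -- illustrative only.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

# Build a weighted bipartite graph; edge weight = |w_ij|, so that
# stronger connections contribute more to the structural statistics.
G = nx.Graph()
inputs = [f"in{i}" for i in range(W.shape[0])]
outputs = [f"out{j}" for j in range(W.shape[1])]
for i, u in enumerate(inputs):
    for j, v in enumerate(outputs):
        G.add_edge(u, v, weight=abs(W[i, j]))

# Per-snapshot structural summary: weighted degree and
# (weighted) eigenvector centrality of each neuron-node.
wdeg = dict(G.degree(weight="weight"))
eig = nx.eigenvector_centrality_numpy(G, weight="weight")

# A temporal summary over training would collect such vectors at each
# epoch, e.g. summary[epoch] = sorted(wdeg.values()), and feed them to
# a downstream classifier/regressor that predicts the NN's accuracy.
print(wdeg)
```

Convolutional layers would need a different unrolling into graph form (e.g. shared kernel weights mapped onto many edges); the bipartite construction above covers only the fully-connected case.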