We present a new framework to measure the intrinsic properties of (deep) neural networks. While we focus on convolutional networks, our framework can be extended to any network architecture. In particular, we evaluate two network properties: capacity (related to expressivity) and compression, both of which depend only on the network structure and are independent of the training and test data. To this end, we propose two metrics: the first, called layer complexity, captures the architectural complexity of any network layer; the second, called layer intrinsic power, encodes how data is compressed along the network. Both metrics are grounded in the concept of layer algebra, which is also introduced in this paper. The key idea is that the global properties depend on the network topology, and that the leaf nodes of any neural network can be approximated using local transfer functions, thereby allowing a simple computation of the global metrics. We also compare state-of-the-art architectures using our metrics and use the resulting properties to analyze classification accuracy on benchmark datasets.
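To make the core idea concrete, the following minimal Python sketch illustrates how global metrics could be computed from local per-layer quantities composed over the network topology. This is not the paper's actual formulation: the per-layer values and the aggregation rules (additive complexity, multiplicative compression) are placeholder assumptions chosen purely for illustration.

```python
# Minimal sketch: global metrics from local per-layer quantities plus topology.
# The local values and the aggregation rules below are assumptions for
# illustration only; the paper defines its own layer-algebra formulas.

from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Layer:
    name: str
    local_complexity: float          # per-layer architectural complexity (assumed given)
    local_power: float               # per-layer compression factor (assumed given)
    successors: List["Layer"] = field(default_factory=list)

def global_metrics(root: Layer) -> Tuple[float, float]:
    """Traverse the layer graph once, accumulating the two global metrics."""
    total_complexity = 0.0
    total_power = 1.0
    seen: Set[str] = set()
    stack = [root]
    while stack:
        layer = stack.pop()
        if layer.name in seen:
            continue
        seen.add(layer.name)
        total_complexity += layer.local_complexity  # capacity accumulates additively (assumption)
        total_power *= layer.local_power            # compression composes multiplicatively (assumption)
        stack.extend(layer.successors)
    return total_complexity, total_power

# Toy three-layer chain: conv -> pool -> fc
fc   = Layer("fc",   local_complexity=2.0, local_power=0.5)
pool = Layer("pool", local_complexity=0.5, local_power=0.25, successors=[fc])
conv = Layer("conv", local_complexity=4.0, local_power=1.0,  successors=[pool])

print(global_metrics(conv))  # -> (6.5, 0.125)
```

Because both quantities are functions of the graph structure and local layer descriptions alone, they can be evaluated without any training or test data, which is the property the abstract emphasizes.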