We propose a complexity measure of a neural network mapping function based on the diversity of the set of tangent spaces from different inputs. Treating each tangent space as a linear PAC concept, we use an entropy-based measure of the bundle of concepts in order to estimate the conceptual capacity of the network. The theoretical maximal capacity of a ReLU network is equivalent to the number of its neurons. In practice, however, due to correlations between neuron activities within the network, the actual capacity can be remarkably small, even for very large networks. Empirical evaluations show that this new measure is correlated with the complexity of the mapping function and thus with the generalisation capabilities of the corresponding network. It captures the effective, as opposed to the theoretical, complexity of the network function. We also showcase some uses of the proposed measure for the analysis and comparison of trained neural network models.
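The intuition above can be sketched concretely: in a ReLU network, each binary on/off activation pattern fixes a linear region, and hence a tangent space, of the mapping. A minimal sketch (not the paper's actual method; the toy network and sampling scheme are illustrative assumptions) estimates the entropy of the distribution over activation patterns sampled from random inputs:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical tiny ReLU network: 4 inputs, 16 hidden neurons, 1 output.
W1 = rng.normal(size=(16, 4))
W2 = rng.normal(size=(1, 16))

def activation_pattern(x):
    """Binary ReLU on/off pattern. For a fixed pattern the network is
    linear, so the pattern identifies the tangent (linear) region."""
    return tuple((W1 @ x > 0).astype(int))

# Sample inputs and estimate the entropy over observed patterns.
X = rng.normal(size=(2000, 4))
counts = Counter(activation_pattern(x) for x in X)
p = np.array(list(counts.values()), dtype=float)
p /= p.sum()
entropy = -np.sum(p * np.log2(p))
print(f"distinct linear regions: {len(counts)}, entropy: {entropy:.2f} bits")
```

With 16 neurons the theoretical ceiling is 16 bits (all 2^16 patterns equally likely), but correlated neuron activity typically leaves the observed entropy far below that, illustrating the gap between theoretical and effective capacity.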