While many designs have recently been proposed to improve the efficiency of convolutional neural networks (CNNs) under a fixed resource budget, a theoretical understanding of these designs is still conspicuously lacking. This paper aims to provide a new framework for answering the question: Is there any remaining model redundancy in a compressed CNN? We begin by developing a general statistical formulation of CNNs and compressed CNNs via tensor decomposition, such that the weights across layers can be summarized into a single tensor. Then, through a rigorous sample complexity analysis, we reveal an important discrepancy between the derived sample complexity and naive parameter counting, which serves as a direct indicator of model redundancy. Motivated by this finding, we introduce a new model redundancy measure for compressed CNNs, called the $K/R$ ratio, which further allows for nonlinear activations. The usefulness of this new measure is supported by ablation studies on popular block designs and datasets.
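To make the kind of redundancy accounting alluded to above concrete, the sketch below contrasts the naive parameter count of a single convolution kernel with that of a rank-$R$ CP-style factorization. The layer shapes and the rank chosen here are purely illustrative assumptions, and the printed ratio is a naive compression factor, not the paper's $K/R$ measure or its sample complexity bound.

```python
# Illustrative only: compare naive parameter counting for a full conv kernel
# against a rank-R CP-style factorization of the same kernel.
# All shapes and the rank R are hypothetical, not taken from the paper.
C_out, C_in, k, R = 128, 64, 3, 16

# Full convolution kernel: C_out x C_in x k x k weights
full_params = C_out * C_in * k * k

# CP factors: one factor matrix per tensor mode, each with R columns
cp_params = R * (C_out + C_in + k + k)

print(f"full kernel parameters:        {full_params}")
print(f"rank-{R} CP factor parameters: {cp_params}")
print(f"naive compression ratio:       {full_params / cp_params:.1f}x")
```

Under these assumed shapes the factorized layer uses roughly 23x fewer weights, which is exactly the sort of gap between raw parameter counts and an effective complexity measure that the abstract's sample complexity analysis is meant to quantify.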