This work focuses on reducing neural network size, which is a major driver of execution time, power consumption, bandwidth, and memory footprint. A key challenge is to reduce size in a manner that can be exploited readily for efficient training and inference without the need for specialized hardware. We propose Self-Compression, a simple, general method that achieves two goals simultaneously: (1) removing redundant weights, and (2) reducing the number of bits required to represent the remaining weights. This is achieved using a generalized loss function that minimizes overall network size. In our experiments we demonstrate floating-point accuracy with as few as 3% of the bits and 18% of the weights remaining in the network.
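The abstract does not spell out the size-aware loss, so the following is only a minimal sketch of one plausible formulation: weights are quantized with a learnable per-channel exponent `e` and bit depth `b` (trained with a straight-through estimator for rounding), and the task loss is augmented with the total number of bits weighted by a hypothetical coefficient `gamma`. All names and the parameterization are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantizedLinear(nn.Module):
    """Linear layer with a learnable quantization exponent and bit depth per output channel.
    A bit depth driven to zero effectively prunes that channel's weights (illustrative sketch)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.e = nn.Parameter(torch.full((out_features, 1), -8.0))  # log2 of the step size
        self.b = nn.Parameter(torch.full((out_features, 1), 4.0))   # bits per weight

    def quantized_weight(self):
        b = F.relu(self.b)                       # bit depth must be non-negative
        scale = torch.exp2(self.e)               # quantization step size 2^e
        max_q = torch.exp2(b - 1.0)              # signed integer range for b bits
        q = torch.clamp(self.weight / scale, -max_q, max_q - 1.0)
        q = q + (torch.round(q) - q).detach()    # straight-through estimator: round forward, identity gradient
        return q * scale

    def size_in_bits(self):
        # total bits needed to store this layer's quantized weights
        return (F.relu(self.b) * self.weight.shape[1]).sum()

    def forward(self, x):
        return F.linear(x, self.quantized_weight(), self.bias)

def self_compression_loss(task_loss, layers, gamma=1e-4):
    """Task loss plus a penalty on total network size in bits (gamma is a hypothetical weight)."""
    total_bits = sum(layer.size_in_bits() for layer in layers)
    return task_loss + gamma * total_bits
```

Because both the rounding step (via the straight-through estimator) and the bit-count penalty are differentiable with respect to `b` and `e`, ordinary gradient descent can trade task accuracy against network size, which is the behavior the abstract describes.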