Modern deep learning models contain millions (or even billions) of unique parameters, each represented by a b-bit number. Popular approaches to compressing neural networks, such as pruning and quantisation, have shown that many of these parameters are superfluous: they can be removed (pruning) or expressed with fewer than b bits (quantisation) without hindering performance. Here we look to go much further in minimising the information content of networks. Rather than a channel- or layer-wise encoding, we look to lossless whole-network quantisation to minimise the entropy and the number of unique parameters in a network. We propose a new method, which we call Weight Fixing Networks (WFN), designed to realise four model outcome objectives: i) very few unique weights, ii) low-entropy weight encodings, iii) unique weight values amenable to energy-saving versions of hardware multiplication, and iv) lossless task performance. Some of these goals conflict. To best balance these conflicts, we combine a few novel (and some well-trodden) tricks: a novel regularisation term (i, ii), a view of clustering cost as relative distance change (i, ii, iv), and a focus on whole-network re-use of weights (i, iii). Our ImageNet experiments demonstrate lossless compression using 56x fewer unique weights and a 1.9x lower weight-space entropy than SOTA quantisation approaches.
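The first two objectives above are directly measurable. A minimal sketch of how they might be computed over a flattened weight vector — the weight values and their distribution here are hypothetical stand-ins, not the paper's actual quantisation output:

```python
import numpy as np

# Hypothetical flattened weight vector standing in for a whole network's
# parameters after quantisation (objectives are whole-network, not per-layer).
rng = np.random.default_rng(0)
weights = rng.choice([-0.5, -0.25, 0.0, 0.25, 0.5], size=10_000,
                     p=[0.1, 0.2, 0.4, 0.2, 0.1])

# Objective (i): the number of unique weight values across the whole network.
values, counts = np.unique(weights, return_counts=True)
n_unique = len(values)

# Objective (ii): the entropy of the weight encoding in bits -- the lower
# bound on average bits per weight under an optimal prefix code.
probs = counts / counts.sum()
entropy_bits = -np.sum(probs * np.log2(probs))

print(n_unique)
print(entropy_bits)
```

With a skewed distribution like the one sampled here, the entropy falls noticeably below the log2(5) ≈ 2.32 bits a uniform code over five values would need, which is the effect the low-entropy objective targets.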