We introduce the new concept of computation coding. Similar to how rate-distortion theory is concerned with the lossy compression of data, computation coding deals with the lossy computation of functions. Particularizing to linear functions, we present an algorithm to reduce the computational cost of multiplying an arbitrary given matrix with an unknown column vector. The algorithm decomposes the given matrix into a product of codebook and wiring matrices whose entries are either zero or signed integer powers of two, so that the matrix-vector product can be computed with bit shifts and additions alone. For a typical implementation of deep neural networks, the proposed algorithm reduces the number of required addition units severalfold. To achieve the accuracy of 16-bit signed integer arithmetic for 4k-vectors, no multipliers and only 1.5 adders per matrix entry are needed.
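To illustrate why the decomposition pays off, here is a minimal sketch (not the paper's decomposition algorithm itself) of the resulting cost model: once every matrix entry is zero or a signed power of two, the matrix-vector product needs only bit shifts and one addition per nonzero entry. The function name and the restriction to integer inputs with nonnegative exponents are assumptions made for illustration.

```python
import numpy as np

def shift_add_matvec(A, x):
    """Multiply A by x using only shifts and additions, assuming every
    entry of A is 0 or +/-2^k with integer k >= 0 (illustrative only)."""
    y = np.zeros(A.shape[0], dtype=np.int64)
    for i in range(A.shape[0]):
        acc = 0
        for j, a in enumerate(A[i]):
            if a == 0:
                continue                            # zero entries cost nothing
            shift = int(abs(a)).bit_length() - 1    # a = +/-2^shift
            term = int(x[j]) << shift               # shift replaces a multiplier
            acc += term if a > 0 else -term         # one adder per nonzero entry
        y[i] = acc
    return y

# Example with entries restricted to {0, +/-1, +/-2, +/-4, ...}
A = np.array([[4,  0, -2],
              [1, -8,  0]])
x = np.array([3, 1, 5])
assert np.array_equal(shift_add_matvec(A, x), A @ x)
```

Under this cost model, the number of adders is the number of nonzero entries across the factor matrices, which is what the proposed decomposition minimizes for a given accuracy.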