Advances in Implicit Neural Representations (INRs) have motivated research on domain-agnostic compression techniques. These methods train a neural network to approximate an object, and then store the weights of the trained model. For example, given an image, a network is trained to learn the mapping from pixel locations to RGB values. In this paper, we propose L$_0$onie, a sparsity-constrained extension of the COIN compression method. Sparsity allows us to leverage the faster learning of overparameterized networks, while retaining the desirable compression rate of smaller models. Moreover, our constrained formulation ensures that the final model respects a pre-determined compression rate, dispensing with the need for expensive architecture search.
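The constrained formulation alluded to above can be read as minimizing reconstruction error subject to a hard sparsity budget on the weights. The following is a hedged sketch, not the paper's exact objective: $f_{\theta}$ denotes the coordinate-to-RGB network, $\mathcal{D}$ the set of pixel coordinate/color pairs, and $k$ an illustrative weight budget chosen to meet the target compression rate.
\[
\min_{\theta} \; \frac{1}{|\mathcal{D}|} \sum_{(\mathbf{x},\,\mathbf{y}) \in \mathcal{D}} \big\lVert f_{\theta}(\mathbf{x}) - \mathbf{y} \big\rVert_2^2
\quad \text{subject to} \quad \lVert \theta \rVert_0 \le k
\]
Fixing $k$ up front is what guarantees the pre-determined compression rate, since the stored artifact is exactly the set of nonzero weights.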
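To make the pixel-to-RGB setup concrete, here is a minimal PyTorch sketch of fitting a coordinate network to a single image, in the spirit of COIN. The sine-activated MLP, layer sizes, and optimizer settings are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: fit a small coordinate network to one image (COIN-style).
# Assumptions (not from the paper): sine activations, MSE loss, Adam;
# `image` is a float tensor of shape (H, W, 3) with values in [0, 1].
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(30.0 * x)  # frequency 30 follows common SIREN practice

def make_inr(hidden=64, layers=3):
    # Maps a 2-D pixel coordinate to an RGB triple.
    blocks, in_dim = [], 2
    for _ in range(layers):
        blocks += [nn.Linear(in_dim, hidden), Sine()]
        in_dim = hidden
    blocks.append(nn.Linear(in_dim, 3))
    return nn.Sequential(*blocks)

def fit(image, steps=1000, lr=2e-4):
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # (H*W, 2) inputs
    targets = image.reshape(-1, 3)                          # (H*W, 3) RGB targets
    model = make_inr()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - targets) ** 2).mean()  # MSE reconstruction loss
        loss.backward()
        opt.step()
    return model  # storing these weights *is* the compressed image
```

In this unconstrained sketch the compression rate is fixed implicitly by the architecture; the sparsity-constrained extension would instead start from a larger network and constrain how many weights remain nonzero.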