Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data to implicit neural representations, i.e. neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of the implicit neural representation directly, we store modulations applied to a meta-learned base network as a compressed code for the data. We further quantize and entropy code these modulations, leading to large compression gains while reducing encoding time by two orders of magnitude compared to baselines. We empirically demonstrate the effectiveness of our method by compressing various data modalities, from images and audio to medical and climate data.
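The core idea above — keep one shared base network and store only per-datum modulations as the compressed code — can be illustrated with a minimal sketch. This is an assumption-laden toy (random stand-in weights, a single sine-activated layer in the style of SIREN-like implicit representations, additive shift modulations), not the paper's actual architecture:

```python
import numpy as np

def siren_layer(x, w, b, shift, w0=30.0):
    # Sine-activated layer; a per-datum "shift" modulation is added
    # before the nonlinearity. Only these shifts would be stored,
    # quantized, and entropy coded -- the base weights are shared.
    return np.sin(w0 * (x @ w + b + shift))

rng = np.random.default_rng(0)
hidden = 8

# Shared, meta-learned base network (here: random stand-in weights).
w_in = rng.normal(size=(2, hidden))
b_in = np.zeros(hidden)
w_out = rng.normal(size=(hidden, 3))
b_out = np.zeros(3)

# Per-datum compressed code: just `hidden` modulation values,
# far smaller than the full weight set of the base network.
shifts = 0.1 * rng.normal(size=hidden)

# Map pixel coordinates (x, y) to RGB values through the modulated network.
coords = np.array([[0.5, -0.25],
                   [0.0, 0.75]])
h = siren_layer(coords, w_in, b_in, shifts)
rgb = h @ w_out + b_out
print(rgb.shape)  # one RGB triple per input coordinate
```

The point of the sketch is the storage asymmetry: the base network is amortized across the whole dataset, so each datum costs only its modulation vector.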