We introduce a modality-agnostic neural compression algorithm based on a functional view of data, parameterised as an Implicit Neural Representation (INR). Bridging the gap between latent coding and sparsity, we obtain compact latent representations that are non-linearly mapped to a soft gating mechanism. This allows a shared INR network to be specialised to each data item through subnetwork selection. After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression. Variational Compression of Implicit Neural Representations (VC-INR) shows improved performance at the same pre-quantisation representational capacity, while also outperforming previous quantisation schemes used for other INR techniques. Our experiments demonstrate strong results across a large and diverse set of modalities using the same algorithm without any modality-specific inductive biases. We show results on images, climate data, 3D shapes and scenes, as well as audio and video, establishing VC-INR as the first INR-based method to outperform codecs as well-known and diverse as JPEG 2000, MP3 and AVC/HEVC on their respective modalities.
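To make the soft-gating idea concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a shared coordinate-based INR whose hidden activations are gated by a per-datum latent code, so that each data item effectively selects a subnetwork of the shared network. All names and dimensions (GatedINR, latent_dim, hidden_dim) are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: a shared SIREN-style INR with per-layer soft gates derived
# from a compact per-item latent code via a learned non-linear map.
import torch
import torch.nn as nn


class GatedINR(nn.Module):
    def __init__(self, coord_dim=2, out_dim=3, hidden_dim=256, depth=4, latent_dim=64):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(coord_dim, hidden_dim)]
            + [nn.Linear(hidden_dim, hidden_dim) for _ in range(depth - 1)]
        )
        self.out = nn.Linear(hidden_dim, out_dim)
        # Map the compact latent to one soft gate per hidden unit and per layer;
        # a sigmoid keeps gates in [0, 1] (soft subnetwork selection).
        self.to_gates = nn.Linear(latent_dim, depth * hidden_dim)
        self.depth, self.hidden_dim = depth, hidden_dim

    def forward(self, coords, latent):
        # coords: (N, coord_dim) query coordinates; latent: (latent_dim,) per-item code.
        gates = torch.sigmoid(self.to_gates(latent)).view(self.depth, self.hidden_dim)
        h = coords
        for i, layer in enumerate(self.layers):
            h = torch.sin(layer(h)) * gates[i]  # gate the shared layer per data item
        return self.out(h)


# Usage: one shared network, one small latent optimised per data item.
inr = GatedINR()
latent = torch.zeros(64, requires_grad=True)   # compact per-item representation
coords = torch.rand(1024, 2)                   # e.g. pixel coordinates in [0, 1]^2
pred = inr(coords, latent)                     # (1024, 3) predicted signal values
```

Under this view, only the small latent codes vary across data items; the dataset of such codes is what is subsequently compressed by optimising the rate/distortion trade-off in the latent space.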