Recent neural compression methods have been based on the popular hyperprior framework. It relies on Scalar Quantization and offers very strong compression performance. This contrasts with recent advances in image generation and representation learning, where Vector Quantization is more commonly employed. In this work, we attempt to bring these lines of research closer by revisiting vector quantization for image compression. We build upon the VQ-VAE framework and introduce several modifications. First, we replace the vanilla vector quantizer with a product quantizer. This intermediate solution between vector and scalar quantization allows for a much wider set of rate-distortion points: it implicitly defines high-quality quantizers that would otherwise require intractably large codebooks. Second, inspired by the success of Masked Image Modeling (MIM) in the context of self-supervised learning and generative image models, we propose a novel conditional entropy model which improves entropy coding by modelling the co-dependencies of the quantized latent codes. The resulting PQ-MIM model is surprisingly effective: its compression performance is on par with recent hyperprior methods. It also outperforms HiFiC in terms of FID and KID metrics when optimized with perceptual losses (e.g. adversarial). Finally, since PQ-MIM is compatible with image generation frameworks, we show qualitatively that it can operate in a hybrid mode between compression and generation, with no further training or finetuning. As a result, we explore the extreme compression regime where an image is compressed into 200 bytes, i.e., less than a tweet.
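To illustrate why product quantization sits between vector and scalar quantization, the following is a minimal sketch with random codebooks (the paper learns them end-to-end; the dimensions `D`, `M`, `K` and the helper names `pq_encode`/`pq_decode` are illustrative, not from the paper). Splitting a D-dimensional latent into M sub-vectors, each with its own K-entry codebook, implicitly defines K**M codewords while storing only M small codebooks:

```python
# Minimal product-quantization (PQ) sketch, assuming random codebooks
# rather than the learned ones used in PQ-MIM.
import numpy as np

rng = np.random.default_rng(0)

D = 8          # latent vector dimension
M = 4          # number of sub-vectors (M = 1 recovers plain VQ;
               # M = D recovers scalar quantization)
K = 256        # codebook entries per sub-quantizer
d = D // M     # dimension of each sub-vector

# One small codebook per sub-space; jointly they act like a single
# vector quantizer with K**M = 256**4 (about 4.3 billion) implicit
# codewords, a codebook that would be intractable to store explicitly.
codebooks = rng.standard_normal((M, K, d))

def pq_encode(x):
    """Quantize x (shape (D,)) to M codebook indices."""
    subs = x.reshape(M, 1, d)
    dists = np.sum((codebooks - subs) ** 2, axis=-1)  # (M, K) squared distances
    return np.argmin(dists, axis=-1)                  # (M,) nearest-codeword indices

def pq_decode(codes):
    """Reconstruct a D-dim vector from its M indices."""
    return np.concatenate([codebooks[m, codes[m]] for m in range(M)])

x = rng.standard_normal(D)
codes = pq_encode(x)       # M small integers: M * log2(K) = 32 bits total
x_hat = pq_decode(codes)   # approximation of x from the implicit codebook
```

Each sub-vector costs only log2(K) bits, so the rate can be tuned finely by varying M and K, which is what enables the wider set of rate-distortion points mentioned above.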