Entropy modeling is a key component of high-performance image compression algorithms. Recent developments in autoregressive context modeling have helped learning-based methods surpass their classical counterparts. However, the performance of those models can be further improved, since spatio-channel dependencies in the latent space remain underexploited and context adaptivity is implemented suboptimally. Inspired by the adaptive characteristics of transformers, we propose a transformer-based context model, named Contextformer, which generalizes the de facto standard attention mechanism to spatio-channel attention. We replace the context model of a modern compression framework with the Contextformer and test it on the widely used Kodak, CLIC2020, and Tecnick image datasets. Our experimental results show that the proposed model provides up to 11% rate savings over the standard Versatile Video Coding (VVC) Test Model (VTM) 16.2 and outperforms various learning-based models in terms of PSNR and MS-SSIM.
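To make the idea of spatio-channel attention concrete, the following is a minimal sketch, not the authors' Contextformer implementation: it assumes the latent tensor is split into channel segments and that attention runs jointly over all (spatial position, channel segment) tokens. All names and hyperparameters here (`SpatioChannelAttention`, `num_segments`, `embed_dim`) are illustrative assumptions, and a real autoregressive context model would additionally apply a causal mask.

```python
# Illustrative sketch of spatio-channel attention (assumed design, not the
# paper's exact architecture). Channels are split into segments; each
# (spatial position, channel segment) pair becomes one attention token.
import torch
import torch.nn as nn


class SpatioChannelAttention(nn.Module):
    def __init__(self, latent_channels: int, num_segments: int,
                 embed_dim: int, num_heads: int = 4):
        super().__init__()
        assert latent_channels % num_segments == 0
        self.num_segments = num_segments
        seg_channels = latent_channels // num_segments
        # Embed each channel segment at a spatial position into one token.
        self.proj_in = nn.Linear(seg_channels, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj_out = nn.Linear(seg_channels if False else embed_dim, seg_channels)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: latent tensor of shape (B, C, H, W)
        b, c, h, w = y.shape
        s = self.num_segments
        # Split channels into s segments and flatten to a sequence of
        # H*W*s spatio-channel tokens per batch element.
        tokens = y.view(b, s, c // s, h * w).permute(0, 3, 1, 2)  # (B, HW, s, C/s)
        tokens = tokens.reshape(b, h * w * s, c // s)
        x = self.proj_in(tokens)
        # Joint attention over all spatial positions and channel segments.
        # (An autoregressive context model would pass a causal attn_mask here.)
        out, _ = self.attn(x, x, x, need_weights=False)
        out = self.proj_out(out)
        # Restore the (B, C, H, W) latent layout.
        out = out.reshape(b, h * w, s, c // s).permute(0, 2, 3, 1)
        return out.reshape(b, c, h, w)


# Smoke test on a random latent of an assumed size.
if __name__ == "__main__":
    y = torch.randn(1, 192, 8, 8)
    m = SpatioChannelAttention(latent_channels=192, num_segments=4, embed_dim=64)
    print(m(y).shape)  # torch.Size([1, 192, 8, 8])
```

The contrast with a purely spatial context model is that the token sequence here has length `H*W*num_segments` rather than `H*W`, so attention weights can express dependencies both across spatial neighbors and across channel segments of the same position.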