Learning-based methods have substantially advanced image compression. Meanwhile, variational autoencoder (VAE) based variable-rate approaches have recently gained much attention because they avoid training a separate network for each compression rate. Despite their remarkable performance, these approaches degrade rapidly once multiple compression/decompression operations are executed: image quality drops drastically and strong artifacts appear. To tackle the problem of high-fidelity, fine-grained variable-rate image compression, we propose the Invertible Activation Transformation (IAT) module. We implement IAT in a mathematically invertible manner on a single-rate Invertible Neural Network (INN) based model, where the quality level (QLevel) is fed into IAT to generate scaling and bias tensors. Together, IAT and QLevel give the image compression model fine variable-rate control while better preserving image fidelity. Extensive experiments demonstrate that a single-rate image compression model equipped with our IAT module achieves variable-rate control without any compromise, and that our IAT-embedded model obtains rate-distortion performance comparable to recent learning-based image compression methods. Furthermore, our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
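The core mechanism the abstract describes, an invertible per-channel scaling and bias conditioned on a quality level, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function `qlevel_to_scale_bias` stands in for the learned network that maps QLevel to modulation tensors, and the random weights are placeholders. Exact invertibility holds because the scale is kept strictly positive via an exponential.

```python
import numpy as np

def qlevel_to_scale_bias(qlevel, channels, rng_seed=0):
    # Hypothetical stand-in for the learned mapping from a scalar
    # quality level to per-channel scaling and bias tensors.
    rng = np.random.default_rng(rng_seed)
    w_s = rng.standard_normal(channels)
    w_b = rng.standard_normal(channels)
    scale = np.exp(0.1 * qlevel * w_s)  # strictly positive, so invertible
    bias = 0.1 * qlevel * w_b
    return scale, bias

def iat_forward(x, qlevel):
    # Element-wise affine transform of activations, conditioned on QLevel.
    scale, bias = qlevel_to_scale_bias(qlevel, x.shape[-1])
    return x * scale + bias

def iat_inverse(y, qlevel):
    # Exact analytic inverse of iat_forward.
    scale, bias = qlevel_to_scale_bias(qlevel, y.shape[-1])
    return (y - bias) / scale

# Round trip: decoding recovers the original activations exactly
# (up to floating-point precision), which is what prevents the
# quality collapse under repeated re-encoding.
x = np.random.default_rng(1).standard_normal((4, 8))
y = iat_forward(x, qlevel=3.0)
x_rec = iat_inverse(y, qlevel=3.0)
assert np.allclose(x, x_rec)
```

Because the transform is analytically invertible for any QLevel, the same single-rate backbone can be modulated to different rates without retraining.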