With the development of deep learning techniques, the combination of deep learning and image compression has drawn considerable attention. Recently, learned image compression methods have surpassed their classical counterparts in terms of rate-distortion performance. However, continuous rate adaptation remains an open question. Some learned image compression methods use multiple networks for multiple rates, while others use a single model at the expense of increased computational complexity and degraded performance. In this paper, we propose a continuously rate-adjustable learned image compression framework, the Asymmetric Gained Variational Autoencoder (AG-VAE). AG-VAE utilizes a pair of gain units to achieve discrete rate adaptation in a single model with negligible additional computation. Then, by using exponential interpolation, continuous rate adaptation is achieved without compromising performance. In addition, we propose an asymmetric Gaussian entropy model for more accurate entropy estimation. Extensive experiments show that our method achieves quantitative performance comparable to state-of-the-art (SOTA) learned image compression methods and better qualitative performance than classical image codecs. In the ablation study, we confirm the usefulness and superiority of the gain units and the asymmetric Gaussian entropy model.
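To make the rate-adaptation idea concrete, below is a minimal sketch of how a pair of channel-wise gain vectors with exponential interpolation could scale a latent tensor before quantization and rescale it afterwards. This is not the authors' implementation: the level count, channel count, and all names (`gain`, `inv_gain`, `interpolated_gain`) are illustrative assumptions.

```python
import torch

# Assumed shapes for illustration: 4 discrete rate levels, 192 latent channels.
num_levels, channels = 4, 192
gain = torch.nn.Parameter(torch.rand(num_levels, channels) + 0.5)      # encoder-side gain vectors (kept positive)
inv_gain = torch.nn.Parameter(torch.rand(num_levels, channels) + 0.5)  # decoder-side inverse gain vectors

def interpolated_gain(table, s, l):
    """Exponential interpolation between adjacent gain vectors:
    m = m_s**l * m_{s+1}**(1 - l), with interpolation factor l in [0, 1]."""
    return table[s] ** l * table[s + 1] ** (1.0 - l)

y = torch.randn(1, channels, 16, 16)                       # latent from the analysis transform
m = interpolated_gain(gain, s=1, l=0.3).view(1, -1, 1, 1)  # gain for an intermediate rate
y_hat = torch.round(y * m)                                 # gained latent, hard quantization (inference-time)
m_inv = interpolated_gain(inv_gain, s=1, l=0.3).view(1, -1, 1, 1)
y_rec = y_hat * m_inv                                      # rescaled latent fed to the synthesis transform
```

In this sketch, the discrete levels correspond to the gain vectors themselves, presumably learned jointly with the autoencoder, while a single interpolation factor selects an intermediate rate between two adjacent levels without retraining.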