Recently, deep learning-based image compression methods have made significant progress and have gradually outperformed traditional approaches, including the latest standard, Versatile Video Coding (VVC), in both the PSNR and MS-SSIM metrics. Two key components of learned image compression frameworks are the entropy model of the latent representations and the encoding/decoding network architectures. Various entropy models have been proposed, such as autoregressive, softmax, logistic mixture, Gaussian mixture, and Laplacian models. Existing schemes use only one of these models. However, due to the vast diversity of images, it is not optimal to use a single model for all images, or even for different regions of one image. In this paper, we propose a more flexible discretized Gaussian-Laplacian-Logistic mixture model (GLLMM) for the latent representations, which can adapt more accurately to different contents across images and to different regions within one image. In addition, for the encoding/decoding network design, we propose a concatenated residual block (CRB) structure, in which multiple residual blocks are serially connected with additional shortcut connections. The CRB improves the learning ability of the network, which further improves the compression performance. Experimental results on the Kodak and Tecnick datasets show that the proposed scheme outperforms state-of-the-art learning-based methods and existing compression standards, including VVC intra coding (4:4:4 and 4:2:0), in terms of both PSNR and MS-SSIM.
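To make the entropy-model idea concrete, the following is a minimal sketch of how a discretized Gaussian-Laplacian-Logistic mixture could assign a probability mass to a quantized latent symbol. It is an illustration only, not the paper's exact parameterization: the component list, its weights, and the helper names (`gllmm_likelihood`, `components`) are assumptions for this example, and in practice the per-symbol parameters would be predicted by the hyperprior/context networks.

```python
import math

def gaussian_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def laplacian_cdf(x, mu, b):
    if x < mu:
        return 0.5 * math.exp((x - mu) / b)
    return 1.0 - 0.5 * math.exp(-(x - mu) / b)

def logistic_cdf(x, mu, s):
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def gllmm_likelihood(y_hat, components):
    """Probability mass of the quantized latent y_hat under a discretized
    Gaussian-Laplacian-Logistic mixture.

    `components` is a list of (weight, kind, mu, scale) tuples, where `kind`
    is "gaussian", "laplacian", or "logistic" and the weights are assumed to
    sum to 1.  Each component contributes the probability mass of the unit
    quantization bin [y_hat - 0.5, y_hat + 0.5].
    """
    cdfs = {"gaussian": gaussian_cdf, "laplacian": laplacian_cdf, "logistic": logistic_cdf}
    p = 0.0
    for weight, kind, mu, scale in components:
        cdf = cdfs[kind]
        p += weight * (cdf(y_hat + 0.5, mu, scale) - cdf(y_hat - 0.5, mu, scale))
    return p

# Example: a three-component mixture with one component of each type
# (weights and parameters are made up for illustration).
mix = [
    (0.5, "gaussian",  0.0, 1.0),
    (0.3, "laplacian", 1.0, 0.8),
    (0.2, "logistic", -1.0, 0.6),
]
bits = -math.log2(gllmm_likelihood(0, mix))  # ideal code length of symbol 0
print(f"p(0) = {gllmm_likelihood(0, mix):.4f}, ~{bits:.2f} bits")
```

Similarly, the sketch below shows one plausible reading of a concatenated residual block: several residual blocks connected in series, with an additional outer shortcut around the whole chain. The module name, channel width, block count, and layer choices are assumptions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

class ConcatenatedResidualBlock(nn.Module):
    """Hypothetical CRB-style module: serially connected residual blocks
    plus an additional shortcut connection around the whole stack."""

    def __init__(self, channels: int, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            for _ in range(num_blocks)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x
        for block in self.blocks:
            out = out + block(out)   # inner residual connections
        return out + x               # additional outer shortcut
```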