Lossless image compression is an essential technique for image storage and transmission when information loss is not tolerable. With the rapid development of deep learning, deep neural networks have been applied in this field to achieve higher compression rates. Methods based on pixel-wise autoregressive statistical models have shown strong performance, but their sequential processing prevents them from being used in practice. Recently, multi-scale autoregressive models have been proposed to address this limitation; they exploit parallel computing hardware efficiently and enable practical systems, but they sacrifice compression performance in exchange for speed. In this paper, we propose a multi-scale progressive statistical model that combines the advantages of the pixel-wise and multi-scale approaches. We develop a flexible mechanism in which the processing order of the pixels can be adjusted easily. Our proposed method outperforms state-of-the-art lossless image compression methods on two large benchmark datasets by a significant margin without degrading inference speed dramatically.
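To make the multi-scale progressive idea concrete, the sketch below is a minimal illustration (not the paper's actual architecture): it partitions an image into scale-ordered pixel groups and codes each group conditioned on everything decoded so far, so pixels within a group can be processed in parallel. The functions `predict_distribution` and `entropy_code` are hypothetical placeholders standing in for a learned probability model and an arithmetic coder.

```python
import numpy as np

def split_scales(image, num_scales=3):
    """Split an image into progressively finer pixel groups.

    Group 0 holds the coarsest sub-sampled pixels; each later group adds
    the pixels first revealed at the next resolution, so every group can
    be coded conditioned on all previously decoded groups.
    """
    h, w = image.shape[:2]
    groups = []
    covered = np.zeros((h, w), dtype=bool)
    for s in reversed(range(num_scales)):
        step = 2 ** s
        mask = np.zeros((h, w), dtype=bool)
        mask[::step, ::step] = True
        new = mask & ~covered            # pixels introduced at this scale
        groups.append(np.argwhere(new))  # (row, col) indices of the group
        covered |= mask
    return groups

def encode(image, groups, predict_distribution, entropy_code):
    """Code each group given the pixels already decoded (hypothetical sketch)."""
    decoded = np.zeros_like(image)
    bitstream = []
    for idx in groups:
        rows, cols = idx[:, 0], idx[:, 1]
        params = predict_distribution(decoded, idx)            # conditional model
        bitstream.append(entropy_code(image[rows, cols], params))
        decoded[rows, cols] = image[rows, cols]                 # reveal this group
    return bitstream
```

Ordering the groups from coarse to fine is what makes the model progressive: each step sees a lower-resolution version of the image as context, and rearranging or subdividing the groups changes the pixel processing order without changing the overall scheme.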