Learning-based image compression has improved to a level where it can outperform traditional image codecs such as HEVC and VVC in terms of coding performance. In addition to good compression performance, device interoperability is essential for a compression codec to be deployed, i.e., encoding and decoding on different CPUs or GPUs should be error-free and incur negligible performance loss. In this paper, we present a method to solve the device interoperability problem of a state-of-the-art image compression network. We apply quantization to the entropy networks that output the entropy parameters. The proposed method ensures cross-platform encoding and decoding, can be implemented quickly, and deviates from the floating-point model results by only 0.3% BD-rate.
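To illustrate the general idea behind quantizing entropy parameters for cross-platform determinism, the following is a minimal sketch, not the paper's actual scheme: the function name, grid step, and clamping bounds are illustrative assumptions. It snaps the means and scales produced by the entropy networks onto a fixed-point grid, so that small floating-point discrepancies between devices cannot change the probability tables used by the range coder.

```python
import numpy as np

def quantize_entropy_params(means, scales, step=1.0 / 64, scale_min=0.11, scale_max=64.0):
    """Snap entropy parameters to a fixed-point grid so that encoder and
    decoder on different devices derive bit-identical probability tables.

    `step`, `scale_min`, and `scale_max` are illustrative values,
    not the settings used in the paper.
    """
    # Clamp scales to a valid range, then round both tensors to the grid.
    scales = np.clip(scales, scale_min, scale_max)
    q_means = np.round(means / step).astype(np.int32)
    q_scales = np.round(scales / step).astype(np.int32)
    # Reconstruct the (now platform-independent) parameter values.
    return q_means * step, q_scales * step

# Example: a tiny cross-device mismatch in the hyperprior output vanishes
# after quantization, so both sides build the same CDF for entropy coding.
means_gpu = np.array([0.12345678, -1.00000012])
means_cpu = means_gpu + 1e-7            # simulated floating-point discrepancy
m_gpu, _ = quantize_entropy_params(means_gpu, np.ones(2))
m_cpu, _ = quantize_entropy_params(means_cpu, np.ones(2))
assert np.array_equal(m_gpu, m_cpu)
```

The design choice here is that as long as both devices agree on the quantized parameters, the arithmetic/range coder is deterministic, which is what makes cross-platform encoding and decoding error-free with only a small BD-rate penalty.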