Learned image compression has been shown to outperform conventional image coding techniques and is approaching practicality in industrial applications. One of the most critical remaining issues is non-deterministic floating-point computation, which makes the probability prediction inconsistent across platforms and causes decoding failures. We propose to solve this problem by introducing well-developed post-training quantization to make model inference integer-arithmetic-only, which is much simpler than existing training- and fine-tuning-based approaches yet still preserves the superior rate-distortion performance of learned image compression. Building on this, we further improve the discretization of the entropy parameters and extend deterministic inference to Gaussian mixture models. With our proposed methods, current state-of-the-art image compression models can infer in a cross-platform-consistent manner, making the further development and practical deployment of learned image compression more promising.
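To illustrate why discretizing the entropy parameters helps with cross-platform consistency, the following minimal sketch (function names, table sizes, and precision are illustrative assumptions, not the paper's implementation) precomputes integer CDF tables for a discrete set of Gaussian scales; at decode time only an integer table index and integer CDF values are used, so platform-dependent floating-point rounding cannot change the entropy coder's probabilities. Full determinism additionally requires the network producing the parameters to run in integer arithmetic, which this sketch does not cover.

```python
import numpy as np
from math import erf, sqrt

def build_cdf_tables(scale_table, num_symbols=64, precision=16):
    """Precompute one integer CDF table per discrete scale (done offline, once).

    The stored tables contain only integers, so every platform that loads
    them evaluates bit-identical probabilities during entropy decoding.
    Note: this simple rounding is not overflow- or zero-probability-safe.
    """
    tables = []
    for s in scale_table:
        xs = np.arange(-num_symbols // 2, num_symbols // 2 + 1)
        # Continuous zero-mean Gaussian CDF at symbol boundaries.
        cdf = np.array([0.5 * (1.0 + erf((x - 0.5) / (s * sqrt(2.0))))
                        for x in xs])
        # Quantize to integers on a 2**precision grid.
        tables.append(np.round(cdf * (1 << precision)).astype(np.int64))
    return tables

def quantize_scale(scale, scale_table):
    """Map a predicted scale to the index of the nearest table entry.

    Downstream coding uses only this integer index, so tiny float
    discrepancies in `scale` rarely select a different CDF table.
    """
    return int(np.argmin(np.abs(np.asarray(scale_table) - scale)))
```

In this sketch, encoder and decoder share `scale_table` and the precomputed tables; a range coder then consumes the integer CDFs directly, never touching floating point at decode time.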