Recent deep learning models outperform standard lossy image compression codecs. However, applying these models on a patch-by-patch basis requires that each image patch be encoded and decoded independently. The influence from adjacent patches is therefore lost, leading to block artefacts at low bitrates. We propose the Binary Inpainting Network (BINet), an autoencoder framework which incorporates binary inpainting to reinstate interdependencies between adjacent patches, for improved patch-based compression of still images. When decoding a patch, BINet additionally uses the binarised encodings from surrounding patches to guide its reconstruction. In contrast to sequential inpainting methods, where patches are decoded based on previous reconstructions, BINet operates directly on the binary codes of surrounding patches without access to the original or reconstructed image data. Encoding and decoding can therefore be performed in parallel. We demonstrate that BINet improves the compression quality of a competitive deep image codec across a range of compression levels.
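The core idea, decoding each patch from the binary codes of its surrounding patches rather than from their reconstructions, can be illustrated with a minimal sketch. All names, shapes, and the stand-in linear decoder below are illustrative assumptions, not the paper's implementation; the point is that each patch's input depends only on binary codes, so all patches can be decoded in parallel.

```python
import numpy as np

def binarize(encoding):
    # Hard sign binarisation of real-valued encodings to {-1, +1}.
    return np.where(encoding >= 0, 1.0, -1.0)

def decode_with_neighbours(codes, row, col, decoder):
    """Decode patch (row, col) from its own binary code concatenated with
    the binary codes of its 8 surrounding patches (zero-padded at borders).
    `codes` has shape (grid_rows, grid_cols, code_dim)."""
    rows, cols, dim = codes.shape
    context = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            context.append(codes[r, c] if 0 <= r < rows and 0 <= c < cols
                           else np.zeros(dim))
    return decoder(np.concatenate(context))  # input size: 9 * code_dim

# Toy usage: a random linear map stands in for the learned decoder network.
rng = np.random.default_rng(0)
codes = binarize(rng.standard_normal((4, 4, 8)))  # 4x4 grid of 8-bit codes
W = rng.standard_normal((16, 9 * 8))              # context -> 16 "pixels"
patch = decode_with_neighbours(codes, 0, 0, lambda x: W @ x)
print(patch.shape)  # (16,)
```

Because `decode_with_neighbours` reads only `codes`, never another patch's reconstruction, the loop over patch positions has no sequential dependency, unlike sequential inpainting schemes.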