We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes. We utilize a soft relaxation of the quantization operation to allow for back-propagation of gradients, and employ vector (rather than scalar) quantization of the latent codes. Furthermore, we apply a similar relaxation in the code probability assignments, enabling direct optimization of the code entropy. To the best of our knowledge, this approach is completely novel. We conduct a set of proof-of-concept experiments confirming the effectiveness of our approach.
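The soft relaxation of vector quantization mentioned above can be illustrated with a minimal NumPy sketch: a softmax over (negative, scaled) distances to the codebook entries yields a differentiable convex combination of code vectors, which approaches hard nearest-neighbor quantization as the temperature parameter grows. The function name and the `sigma` parameter here are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def soft_vector_quantize(z, codebook, sigma=1.0):
    """Soft (differentiable) vector quantization of z against a codebook.

    z: latent vector of shape (d,); codebook: K code vectors, shape (K, d).
    Returns the soft-quantized vector and the hard (argmax) code index.
    """
    # Squared Euclidean distance from z to each codebook vector.
    d2 = np.sum((codebook - z) ** 2, axis=1)
    # Softmax over negative scaled distances (shifted for stability).
    logits = -sigma * d2
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Convex combination of code vectors; hard code is the argmax weight.
    return w @ codebook, int(np.argmax(w))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K = 8 code vectors in R^4
z = rng.normal(size=4)
z_soft, k = soft_vector_quantize(z, codebook, sigma=5.0)
```

As `sigma` increases, the weights concentrate on the nearest code vector, so the soft output converges to ordinary hard vector quantization while remaining differentiable at finite `sigma`.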