Convolutional autoencoders are now at the forefront of image compression research. To improve their entropy coding, the encoder output is typically analyzed with a second autoencoder that generates per-variable parametrized prior probability distributions. We instead propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions acting as a competition of experts. The trained prior distributions are stored in a static table of cumulative distribution functions. During inference, this table is used by an entropy coder as a look-up table to determine the best prior for each spatial location. Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior, at only a fraction of its entropy coding and decoding complexity.
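The per-location prior selection can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes quantized integer latents of shape (C, H, W) and K learned priors represented as per-channel probability mass functions (from which the static CDF table would be built). The "competition of experts" is emulated by picking, for each spatial location, the prior with the smallest estimated code length; the names `latents`, `prior_pmfs`, and `select_priors` are hypothetical.

```python
import numpy as np

def select_priors(latents, prior_pmfs, eps=1e-9):
    """For each spatial location, pick the prior that minimizes the
    estimated bit cost of the latent symbols at that location.

    latents:    int array of shape (C, H, W), quantized latent symbols.
    prior_pmfs: array of shape (K, C, S), K learned priors as PMFs over
                an alphabet of S symbols for each of the C channels.
    """
    K, C, S = prior_pmfs.shape
    C_, H, W = latents.shape
    assert C == C_, "channel count of latents and priors must match"

    # Estimated bits for prior k at location (h, w): sum over channels
    # of -log2 p_k(symbol).
    bits = np.zeros((K, H, W))
    for k in range(K):
        p = prior_pmfs[k][np.arange(C)[:, None, None], latents]  # (C, H, W)
        bits[k] = -np.log2(p + eps).sum(axis=0)

    best = bits.argmin(axis=0)          # (H, W) index of the winning prior
    best_bits = bits.min(axis=0).sum()  # estimated total payload in bits
    return best, best_bits

# Toy usage with random latents and random (normalized) priors.
rng = np.random.default_rng(0)
latents = rng.integers(0, 16, size=(8, 4, 4))
pmfs = rng.random((5, 8, 16))
pmfs /= pmfs.sum(axis=-1, keepdims=True)
indices, est_bits = select_priors(latents, pmfs)
print(indices.shape, round(est_bits, 1))
```

In an actual codec, the selected prior indices would be signaled alongside the bitstream, and an entropy coder would use the corresponding precomputed CDFs from the static table to encode the symbols.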