The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG, and convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of (DCT channel, spatial location, DCT coefficient) triples. We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high-resolution images. On a range of image datasets, we demonstrate that our approach can generate high-quality, diverse images, with sample metric scores competitive with state-of-the-art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.
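The JPEG-inspired representation described above can be illustrated with a minimal NumPy sketch: each 8x8 image block is transformed with an orthonormal 2D DCT, quantized by a scalar divisor, and the nonzero quantized coefficients are emitted as (DCT channel, block position, coefficient value) triples. The function names, the single scalar quantization divisor, and the raster ordering of blocks are illustrative assumptions, not the paper's exact quantization tables or sequence ordering.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]   # frequency index
    m = np.arange(n)[None, :]   # sample index
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] *= 1 / np.sqrt(2)      # DC row scaling for orthonormality
    return c * np.sqrt(2 / n)

def image_to_sparse_triples(img: np.ndarray, block: int = 8, q: float = 16.0):
    """Convert a grayscale image (H, W), with H and W divisible by `block`,
    into (DCT channel, block position, value) triples for the nonzero
    quantized DCT coefficients of each block. `q` is an illustrative
    scalar quantization divisor, not a JPEG quantization table."""
    C = dct_matrix(block)
    h, w = img.shape
    triples = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # 2D DCT of the block: C @ B @ C^T
            coeffs = C @ img[by:by + block, bx:bx + block] @ C.T
            quantized = np.round(coeffs / q).astype(int)
            # raster-scan index of this block within the block grid
            pos = (by // block) * (w // block) + (bx // block)
            for ch, v in enumerate(quantized.flatten()):
                if v != 0:
                    triples.append((ch, pos, int(v)))
    return triples
```

For a constant 8x8 image of value 128, only the DC coefficient (channel 0) survives quantization, so the whole block compresses to a single triple; natural images similarly concentrate energy in few channels, which is what makes the sequence representation sparse.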