We propose Stratified Image Transformer (StraIT), a pure non-autoregressive (NAR) generative model that surpasses existing autoregressive (AR) and diffusion models (DMs) in high-quality image synthesis. In contrast to the under-exploitation of visual characteristics in existing vision tokenizers, we leverage the hierarchical nature of images to encode visual tokens into stratified levels with emergent properties. Through the proposed image stratification, which produces an interlinked token pair, we alleviate the modeling difficulty and lift the generative power of NAR models. Our experiments demonstrate that StraIT significantly improves NAR generation and outperforms existing DMs and AR methods while being orders of magnitude faster, achieving an FID score of 3.96 at 256×256 resolution on ImageNet without leveraging any guidance in sampling or auxiliary image classifiers. When equipped with classifier-free guidance, our method achieves an FID of 3.36 and an IS of 259.3. In addition, we illustrate the decoupled modeling process of StraIT generation, showing its compelling properties on applications including domain transfer.
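The classifier-free guidance mentioned above follows the standard recipe of blending conditional and unconditional predictions at sampling time. A minimal sketch of that blending rule, with illustrative names (`cond_logits`, `uncond_logits`, `guidance_scale`) that are not StraIT's actual implementation:

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, guidance_scale):
    """Blend conditional and unconditional logits (classifier-free guidance).

    guidance_scale = 0 recovers unconditional sampling,
    guidance_scale = 1 recovers conditional sampling, and
    larger values push samples further toward the class condition.
    """
    cond_logits = np.asarray(cond_logits, dtype=float)
    uncond_logits = np.asarray(uncond_logits, dtype=float)
    # Extrapolate from the unconditional prediction toward the conditional one.
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```

Tokens are then sampled from the blended logits instead of the raw conditional ones; the guidance scale trades off sample fidelity (IS) against diversity (FID).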