Pretraining language models with next-token prediction on massive text corpora has delivered phenomenal zero-shot, few-shot, transfer-learning, and multi-tasking capabilities on both generative and discriminative language tasks. Motivated by this success, we explore a Vector-quantized Image Modeling (VIM) approach that involves pretraining a Transformer to predict rasterized image tokens autoregressively. The discrete image tokens are encoded from a learned Vision-Transformer-based VQGAN (ViT-VQGAN). We first propose multiple improvements over vanilla VQGAN, from architecture to codebook learning, yielding better efficiency and reconstruction fidelity. The improved ViT-VQGAN in turn boosts vector-quantized image modeling tasks, including unconditional and class-conditioned image generation and unsupervised representation learning. When trained on ImageNet at 256x256 resolution, we achieve an Inception Score (IS) of 175.1 and a Fréchet Inception Distance (FID) of 4.17, a dramatic improvement over the vanilla VQGAN, which obtains an IS of 70.6 and an FID of 17.04. Based on ViT-VQGAN and unsupervised pretraining, we further evaluate the pretrained Transformer by averaging intermediate features, similar to Image GPT (iGPT). This ImageNet-pretrained VIM-L significantly outperforms iGPT-L, improving linear-probe accuracy from 60.3% to 72.2% at a similar model size. VIM-L also outperforms iGPT-XL, which is trained with extra web image data and a larger model size.