Image generation has been a long sought-after but challenging task, and performing the generation task efficiently is similarly difficult. Researchers often attempt to create a "one size fits all" generator, where there are few differences in the parameter space for drastically different datasets. Herein, we present a new transformer-based framework, dubbed StyleNAT, targeting high-quality image generation with superior efficiency and flexibility. At the core of our model is a carefully designed framework that partitions the attention heads to capture local and global information, achieved through the use of Neighborhood Attention (NA). With different heads able to attend to varying receptive fields, the model is better able to combine this information and adapt, in a highly flexible manner, to the data at hand. StyleNAT attains a new SOTA FID score on FFHQ-256 of 2.046, beating prior work with convolutional models such as StyleGAN-XL and transformers such as HiT and StyleSwin, and a new transformer SOTA on FFHQ-1024 with an FID score of 4.174. These results show a 6.4% improvement on FFHQ-256 scores over StyleGAN-XL, with a 28% reduction in the number of parameters and a 56% improvement in sampling throughput. Code and models will be open-sourced at https://github.com/SHI-Labs/StyleNAT .
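To make the head-partitioning idea concrete, below is a minimal, self-contained PyTorch sketch of the concept. This is not the released StyleNAT implementation, which builds on the NATTEN library's fused neighborhood-attention kernels; the module names (`NeighborhoodSelfAttention2d`, `PartitionedNA`), the kernel size, and the dilations are our own illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeighborhoodSelfAttention2d(nn.Module):
    """Toy neighborhood attention: each pixel attends to a k x k window of
    keys/values at a given dilation. Purely illustrative; the actual model
    relies on NATTEN's fused kernels, and real NA shifts windows at image
    borders rather than zero-padding as done here."""

    def __init__(self, dim, heads, kernel_size, dilation=1):
        super().__init__()
        self.heads, self.k, self.d = heads, kernel_size, dilation
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        pad = self.d * (self.k - 1) // 2  # keeps all H*W query positions
        unfold = lambda t: F.unfold(t, self.k, dilation=self.d, padding=pad)
        # (B, C*k*k, H*W) -> (B, heads, C//heads, k*k, H*W)
        k = unfold(k).view(B, self.heads, C // self.heads, self.k**2, H * W)
        v = unfold(v).view(B, self.heads, C // self.heads, self.k**2, H * W)
        q = q.view(B, self.heads, C // self.heads, 1, H * W) * self.scale
        # Dot product over channels, softmax over the k*k neighbors.
        attn = (q * k).sum(dim=2, keepdim=True).softmax(dim=3)
        out = (attn * v).sum(dim=3).reshape(B, C, H, W)
        return self.proj(out)


class PartitionedNA(nn.Module):
    """Head partitioning: half the heads use a dense local window, the other
    half a dilated (sparse, wider) window, and the outputs are merged.
    The kernel size and dilation are hypothetical placeholders."""

    def __init__(self, dim, heads=8, kernel_size=7):
        super().__init__()
        assert dim % 2 == 0 and heads % 2 == 0
        self.local = NeighborhoodSelfAttention2d(
            dim // 2, heads // 2, kernel_size, dilation=1)
        self.wide = NeighborhoodSelfAttention2d(
            dim // 2, heads // 2, kernel_size, dilation=4)

    def forward(self, x):
        a, b = x.chunk(2, dim=1)  # split channels (and thus heads) in two
        return torch.cat([self.local(a), self.wide(b)], dim=1)


x = torch.randn(1, 64, 32, 32)
assert PartitionedNA(64)(x).shape == x.shape  # shape-preserving block
```

Because each head group sees a different receptive field, the merged output mixes fine local texture with sparser long-range context within a single attention layer, which is the flexibility the abstract refers to.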