Generating images from natural language descriptions is a challenging task. Prior research has mainly focused on enhancing generation quality through spatial and/or textual attention, thereby neglecting the relationships between channels. In this work, we propose the Combined Attention Generative Adversarial Network (CAGAN) to generate photo-realistic images from textual descriptions. The proposed CAGAN employs two attention models: word attention, which draws different sub-regions conditioned on related words, and squeeze-and-excitation attention, which captures non-linear interactions among channels. With spectral normalisation to stabilise training, our proposed CAGAN improves the state of the art in Inception Score (IS) and Fréchet Inception Distance (FID) on the CUB dataset, and in FID on the more challenging COCO dataset. Furthermore, we demonstrate that judging a model by a single evaluation metric can be misleading: an additional model augmented with local self-attention achieves a higher IS, outperforming the state of the art on the CUB dataset, yet generates unrealistic images through feature repetition.
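For readers unfamiliar with the channel-attention mechanism named above, the following is a minimal PyTorch sketch of squeeze-and-excitation attention (Hu et al., 2018) alongside spectral normalisation of a convolution; the class, parameter names, and reduction ratio are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Squeeze-and-excitation channel attention: global-average-pool each
    channel, pass the result through a two-layer bottleneck MLP, and
    rescale channels by the resulting sigmoid gates."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(              # excitation: non-linear channel interaction
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                      # rescale each channel

# Usage: reweight a 64-channel feature map, and wrap a convolution with
# spectral normalisation as a training stabiliser (torch.nn.utils.spectral_norm).
feats = torch.randn(4, 64, 32, 32)
out = SEAttention(64)(feats)                               # same shape, channel-rescaled
conv = nn.utils.spectral_norm(nn.Conv2d(64, 64, 3, padding=1))
out = conv(out)
```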