Image generation through generative modelling is a widely discussed field today. It supports various applications, such as upscaling existing images, synthesising non-existent objects (interior design scenes, products, or even human faces), and enabling transfer learning. In this context, Generative Adversarial Networks (GANs), a widely studied class of machine learning frameworks first introduced in "Generative Adversarial Nets" by Goodfellow et al., achieve these goals. In our work, we reproduce and evaluate the GANformer, a novel variation of the original GAN architecture proposed in "Generative Adversarial Transformers" by Hudson and Zitnick. This project aimed to recreate the methods presented in that paper, reproduce the original results, and comment on the authors' claims. Due to resource and time limitations, we had to constrain the network's training time and the types and sizes of the datasets used. We successfully recreated both variations of the proposed GANformer model and found differences between the authors' results and ours. Moreover, discrepancies between the methodology described in the publication and the one implemented in the released code allowed us to study two undisclosed variations of the presented procedures.