Generative Adversarial Networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other in order to generate new, synthetic instances of data that can pass for real data. Training a GAN is a challenging problem that requires advanced techniques such as hyperparameter tuning and architecture engineering. Many different losses, regularization and normalization schemes, and network architectures have been proposed to address this problem for different types of datasets. It therefore becomes necessary to understand these experimental observations and distill a simple theory from them. In this paper, we perform empirical experiments using parameterized synthetic datasets to probe which traits affect learnability.
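For concreteness, a minimal sketch of the adversarial training loop described above is given here, on 1-D synthetic Gaussian data. The network sizes, optimizer settings, and the particular synthetic distribution are illustrative assumptions, not the experimental configuration used in this paper.

```python
# Minimal GAN training sketch in PyTorch (illustrative assumptions throughout).
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 8, 1, 64

# Generator maps noise to synthetic samples; discriminator scores realness.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Real samples drawn from a parameterized synthetic distribution, here N(3, 1).
    real = 3.0 + torch.randn(batch_size, data_dim)
    fake = G(torch.randn(batch_size, latent_dim))

    # Discriminator update: distinguish real samples from generated ones.
    loss_D = bce(D(real), torch.ones(batch_size, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch_size, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: push the discriminator to score fakes as real.
    loss_G = bce(D(fake), torch.ones(batch_size, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

The two optimization steps alternate, so neither network is trained to convergence on its own; this adversarial coupling is what makes GAN training sensitive to the losses, normalization schemes, and architectures mentioned above.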