Generative Adversarial Networks (GANs) have emerged as powerful generative models, capable of implicitly learning data distributions of arbitrary complexity. However, GAN training is empirically well-known for being highly unstable and sensitive: the loss functions of both the discriminator and the generator, with respect to their parameters, tend to oscillate wildly during training. Various loss functions have been proposed to stabilize training and improve the quality of generated images. In this paper, we perform an empirical study on the impact of several loss functions on the performance of a standard GAN model, the Deep Convolutional Generative Adversarial Network (DCGAN). We introduce a new improvement that employs a relativistic discriminator in place of the classical deterministic discriminator in DCGANs and applies a margin cosine loss function to both the generator and the discriminator. This results in a novel loss function, namely the Relativistic Margin Cosine Loss (RMCosGAN). We carry out extensive experiments with four datasets: CIFAR-$10$, MNIST, STL-$10$, and CAT. We compare the performance of RMCosGAN with that of existing loss functions based on two metrics: Fréchet inception distance and inception score. The experimental results show that RMCosGAN outperforms the existing loss functions and significantly improves the quality of generated images.
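To make the two ingredients of the proposed loss concrete, the sketch below shows (i) a relativistic average discriminator loss, where real samples are pushed to score higher than the *average* fake score (and vice versa), and (ii) a margin cosine adjustment applied to the discriminator's logits. This is a minimal NumPy illustration under stated assumptions: the helper names, the scale `s`, and the margin `m` are hypothetical and chosen for illustration; the exact RMCosGAN formulation in the paper may combine these terms differently.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def relativistic_avg_d_loss(c_real, c_fake):
    """Relativistic average discriminator loss (illustrative sketch).

    c_real, c_fake: raw critic outputs (logits) for real and fake batches.
    Real logits are compared against the mean fake logit, and vice versa,
    rather than against an absolute 0/1 target as in the standard GAN loss.
    """
    d_real = sigmoid(c_real - c_fake.mean())   # P(real more realistic than avg fake)
    d_fake = sigmoid(c_fake - c_real.mean())   # P(fake more realistic than avg real)
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))


def margin_cosine_logit(cos_theta, s=10.0, m=0.1):
    """Margin cosine adjustment (CosFace-style, hypothetical parameters).

    cos_theta: cosine similarity between a feature and its class direction.
    Subtracting the margin m and rescaling by s tightens the decision
    boundary before the logit enters the loss above.
    """
    return s * (cos_theta - m)
```

As a sanity check, when real and fake logits are identical the relativistic loss reduces to $2\log 2 \approx 1.386$, and it shrinks as the discriminator separates the two batches.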