Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that the loss of an ideal adversarial net is a constant known a priori. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to $27\%$ in Fréchet Inception Distance for image generation and $3\%$ in test accuracy for domain adaptation.
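To make the idea concrete, the following is a minimal sketch of such a scheduler, not the paper's actual algorithm. It assumes the standard GAN discriminator objective, whose ideal loss is the known constant $\log 4$ (at the optimum the discriminator outputs $1/2$ everywhere). The class name `AdversaryLRScheduler` and the multiplicative update rule are illustrative assumptions:

```python
import math

# For the standard GAN cross-entropy objective, the ideal discriminator
# outputs 1/2 everywhere, so its loss is 2*log(2) = log(4) -- a constant
# known a priori, as the abstract notes.
IDEAL_D_LOSS = math.log(4.0)

class AdversaryLRScheduler:
    """Hypothetical sketch: nudge the adversary's learning rate so that
    its observed loss tracks the known ideal value."""

    def __init__(self, base_lr, target=IDEAL_D_LOSS,
                 factor=1.1, min_lr=1e-6, max_lr=1.0):
        self.lr = base_lr
        self.target = target
        self.factor = factor      # multiplicative adjustment per step
        self.min_lr = min_lr
        self.max_lr = max_lr

    def step(self, d_loss):
        # Loss above the ideal constant: adversary is too weak relative to
        # the generator, so raise its learning rate. Loss below: adversary
        # is too strong, so lower it.
        if d_loss > self.target:
            self.lr = min(self.lr * self.factor, self.max_lr)
        else:
            self.lr = max(self.lr / self.factor, self.min_lr)
        return self.lr

# Usage: call step() with the adversary's loss after each update.
sched = AdversaryLRScheduler(base_lr=1e-3)
lr_up = sched.step(2.0)    # loss above log(4): lr increases
lr_down = sched.step(0.5)  # loss below log(4): lr decreases
```

A multiplicative rule is chosen here only for simplicity; the point is that the a-priori loss constant gives the controller a fixed set point, so no per-dataset target needs to be tuned.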