Training generative adversarial networks (GANs) is well known for its difficulty in converging. This paper first confirms analytically one of the culprits behind this convergence issue: the lack of convexity in GAN objective functions, and hence the well-posedness problem of GAN models. It then proposes a stochastic control approach for tuning hyper-parameters in GAN training. In particular, it presents an optimal solution for an adaptive learning rate that depends on the convexity of the objective function, and establishes a precise relation between improper choices of learning rate and explosion in GAN training. Finally, empirical studies demonstrate that training algorithms incorporating this selection methodology outperform standard ones.
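The link between curvature (convexity), learning rate, and explosion can be illustrated with a minimal toy sketch. This is not the paper's stochastic-control solution; the quadratic objective, the `run_gd` helper, and the constants below are illustrative assumptions. On f(x) = 0.5·c·x², gradient descent contracts only when the learning rate stays below 2/c, so a rate chosen without regard to curvature can diverge:

```python
def run_gd(lr, c=10.0, x0=1.0, steps=50):
    """Gradient descent on the toy objective f(x) = 0.5 * c * x**2.

    The gradient is c*x, so each step multiplies x by (1 - lr*c); the
    iteration is stable iff |1 - lr*c| < 1, i.e. lr < 2/c.
    """
    x = x0
    for _ in range(steps):
        x -= lr * c * x
    return x

# Curvature-aware rate: lr = 1/c kills the error in a single step.
adaptive = run_gd(lr=1.0 / 10.0)
# Curvature-ignorant rate: lr = 0.25 > 2/c, so |x| grows by 1.5x per step.
too_large = run_gd(lr=0.25)

print(abs(adaptive) < 1e-6)   # converged
print(abs(too_large) > 1e6)   # exploded
```

The same mechanism, with stochastic gradients in place of exact ones, is what motivates choosing the learning rate as a function of the (local) convexity of the objective.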