Generative Adversarial Networks (GANs) have proven hugely successful in image generation tasks, but GAN training suffers from instability. Many works have improved the stability of GAN training by manually modifying the GAN architecture, which requires human expertise and extensive trial-and-error. Thus, neural architecture search (NAS), which aims to automate model design, has been applied to search for GANs on the task of unconditional image generation. Early NAS-GAN works search only the generator to reduce the search difficulty. Some recent works have attempted to search both the generator (G) and the discriminator (D) to improve GAN performance, but they still suffer from the instability of GAN training during the search. To alleviate this instability, we propose an efficient two-stage evolutionary algorithm (EA) based NAS framework to discover GANs, dubbed \textbf{EAGAN}. Specifically, we decouple the search of G and D into two stages and propose a weight-resetting strategy to improve the stability of GAN training. In addition, we apply evolution operations to produce Pareto-front architectures under multiple objectives, yielding a superior combination of G and D. By leveraging the weight-sharing strategy and low-fidelity evaluation, EAGAN significantly shortens the search time. EAGAN achieves highly competitive results on CIFAR-10 (IS=8.81$\pm$0.10, FID=9.91) and surpasses previous NAS-searched GANs on the STL-10 dataset (IS=10.44$\pm$0.087, FID=22.18).
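The Pareto-front selection mentioned above can be illustrated with a minimal non-dominated filtering sketch. This is a generic illustration under assumed objectives (maximize Inception Score, minimize FID); the function name, candidate tuples, and scores below are hypothetical and do not reproduce the paper's actual EA operators or evaluation pipeline.

```python
def pareto_front(candidates):
    """Keep candidates not dominated under two objectives.

    candidates: list of (name, inception_score, fid) tuples,
    where IS is maximized and FID is minimized (assumed objectives).
    A candidate is dominated if another is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for i, (_, is_i, fid_i) in enumerate(candidates):
        dominated = any(
            is_j >= is_i and fid_j <= fid_i and (is_j > is_i or fid_j < fid_i)
            for j, (_, is_j, fid_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front


# Hypothetical scores for three searched architectures:
# "b" is dominated by "a" (lower IS and higher FID), so only "a" and "c"
# survive as the Pareto front to seed the next evolution round.
archs = [("a", 8.8, 9.9), ("b", 8.5, 12.0), ("c", 9.0, 11.0)]
print([name for name, _, _ in pareto_front(archs)])  # → ['a', 'c']
```

In a full multi-objective EA, this non-dominated set would then be subject to crossover and mutation to produce the next generation; the sketch shows only the selection criterion.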