We propose a stable, parallel approach to train Wasserstein Conditional Generative Adversarial Neural Networks (W-CGANs) under the constraint of a fixed computational budget. Unlike previous distributed GAN training techniques, our approach avoids inter-process communication, reduces the risk of mode collapse, and improves scalability by using multiple generators, each concurrently trained on a single data label. The use of the Wasserstein metric also reduces the risk of cycling by stabilizing the training of each generator. We illustrate the approach on CIFAR10, CIFAR100, and ImageNet1k, three standard benchmark image datasets, maintaining the original image resolution for each dataset. Performance is assessed in terms of scalability and of the final accuracy attained within a fixed computational time and fixed computational resources. To measure accuracy, we use the inception score, the Fréchet inception distance, and image quality. Compared to previous results obtained by applying the parallel approach to deep convolutional conditional generative adversarial neural networks (DC-CGANs), we show improvements in inception score and Fréchet inception distance, as well as in the quality of the generated images. Weak scaling is attained on all three datasets using up to 2,000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
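To make the training scheme concrete, the sketch below shows one way the label-per-process idea could be organized. It is a minimal illustration, not the authors' implementation: it assumes mpi4py for the (hypothetical) rank-to-label mapping, PyTorch for the models, toy fully connected networks rather than the convolutional architectures used in the paper, and weight clipping as the Lipschitz constraint from the original WGAN formulation. Each process trains its own generator/critic pair on images of a single label, so no gradients or parameters are exchanged between processes.

```python
# Minimal sketch (not the authors' code): one independent W-GAN per class label.
# Assumes mpi4py and PyTorch; rank i <-> class label i is a hypothetical mapping.
import torch
import torch.nn as nn
from mpi4py import MPI

comm = MPI.COMM_WORLD
label = comm.Get_rank()                    # this process handles only this label
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

latent_dim, img_dim = 100, 32 * 32 * 3     # e.g., flattened CIFAR10-sized images

generator = nn.Sequential(                 # toy fully connected generator
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh()).to(device)
critic = nn.Sequential(                    # toy critic (no sigmoid: Wasserstein)
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1)).to(device)

opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
clip = 0.01                                # weight clipping as in the original WGAN

def train_step(real_batch, n_critic=5):
    """One WGAN update on this rank's single-label data; no inter-rank communication."""
    for _ in range(n_critic):
        z = torch.randn(real_batch.size(0), latent_dim, device=device)
        fake = generator(z).detach()
        # Critic maximizes E[critic(real)] - E[critic(fake)], i.e. minimizes the negative
        loss_c = critic(fake).mean() - critic(real_batch).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in critic.parameters():      # enforce an approximate 1-Lipschitz constraint
            p.data.clamp_(-clip, clip)
    z = torch.randn(real_batch.size(0), latent_dim, device=device)
    loss_g = -critic(generator(z)).mean()  # generator minimizes -E[critic(fake)]
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()
```

Under these assumptions, the only per-rank coordination is reading the label-specific shard of the dataset, which is consistent with the absence of inter-process communication described above and with weak scaling by assigning one GPU per label.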