Training generative adversarial networks (GANs) in a distributed fashion is a promising technology, since it enables training GANs efficiently on massive amounts of data in real-world applications. However, GANs are known to be difficult to train with SGD-type methods (which may fail to converge), and distributed SGD-type methods may also suffer from high communication cost. In this paper, we propose a distributed GAN training algorithm with quantized gradients, dubbed DQGAN, which is the first distributed training method with quantized gradients for GANs. The new method trains GANs based on a specific single-machine algorithm, the Optimistic Mirror Descent (OMD) algorithm, and is applicable to any gradient compression method that satisfies the general $\delta$-approximate compressor property. The error-feedback operation we design compensates for the bias caused by compression and, moreover, ensures the convergence of the new method. Theoretically, we establish the non-asymptotic convergence of the DQGAN algorithm to a first-order stationary point, which shows that the proposed algorithm achieves a linear speedup in the parameter-server model. Empirically, our experiments show that DQGAN reduces communication cost and saves training time with only slight performance degradation on both synthetic and real datasets.
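As a minimal sketch of the ingredients named above (the function and variable names here are our own, not from the paper), the code below combines an extra-gradient-style OMD update with top-$k$ sparsification, a standard example of a $\delta$-approximate compressor with $\delta = k/d$, and an error-feedback buffer that carries the compression residual into the next round:

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v and zero the rest.
    This is a delta-approximate compressor with delta = k / v.size."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def omd_step_with_error_feedback(theta, grad_fn, memory, lr=0.01, k=10):
    """One optimistic (extra-gradient) step in which the transmitted
    update is compressed and the residual is stored in `memory`
    (error feedback) so it is re-injected on the next round."""
    # Look-ahead step using the gradient at the current point.
    g = grad_fn(theta)
    theta_half = theta - lr * g
    # Gradient at the extrapolated point; this is what a worker
    # would communicate in the distributed setting.
    g_half = grad_fn(theta_half)
    # Error feedback: compress (memory + new step), keep the residual.
    p = memory + lr * g_half
    p_compressed = topk_compress(p, k)
    memory = p - p_compressed      # residual carried to the next step
    theta = theta - p_compressed   # update with the compressed message
    return theta, memory

# Toy usage on a quadratic objective (illustrative only).
theta = np.random.randn(100)
memory = np.zeros(100)
grad = lambda th: 2.0 * th  # gradient of ||theta||^2
for _ in range(200):
    theta, memory = omd_step_with_error_feedback(theta, grad, memory)
```

The design point the sketch is meant to convey is that the compressor may discard most coordinates each round, but because the discarded residual accumulates in `memory` and is added back before the next compression, every coordinate of the gradient signal is eventually applied, which is what makes convergence guarantees possible despite the biased compression.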