We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$ unconditional ImageNet.
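To make the contrast concrete, below is a minimal sketch of the kind of approximate additive gradient regularizer the abstract compares against: a Gaussian-kernel MMD critic loss with a WGAN-GP-style penalty that pushes the critic's input gradient norm toward 1. This is not the paper's exact analytical constraint method; it assumes PyTorch, vector-valued inputs, and illustrative names (`critic`, `sigma`, `lam`) that are not from the paper.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows.
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(fx, fy, sigma=1.0):
    # Biased estimator of MMD^2 between critic features of real (fx)
    # and generated (fy) samples.
    kxx = gaussian_kernel(fx, fx, sigma).mean()
    kyy = gaussian_kernel(fy, fy, sigma).mean()
    kxy = gaussian_kernel(fx, fy, sigma).mean()
    return kxx + kyy - 2 * kxy

def critic_loss(critic, real, fake, sigma=1.0, lam=10.0):
    # The critic maximizes MMD^2 (hence the minus sign); the additive
    # penalty constrains the gradient norm at interpolates between real
    # and fake samples. Assumes inputs of shape (batch, features).
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp,
                               create_graph=True)[0]
    penalty = (grad.norm(2, dim=1) - 1).pow(2).mean()
    return -mmd2(critic(real), critic(fake), sigma) + lam * penalty
```

The paper's claimed advantage is that its gradient constraint is enforced exactly and analytically in the loss itself, rather than approximately through a sampled penalty term such as the one above.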