Generative adversarial networks (GANs), a class of distribution-learning methods based on a two-player game between a generator and a discriminator, can generally be formulated as a min-max problem based on the variational representation of a divergence between the unknown and the generated distributions. We introduce structure-preserving GANs as a data-efficient framework for learning distributions with additional structure such as group symmetry, by developing new variational representations for divergences. Our theory shows that we can reduce the discriminator space to its projection on the invariant discriminator space, using the conditional expectation with respect to the sigma-algebra associated with the underlying structure. In addition, we prove that the discriminator space reduction must be accompanied by a careful design of structured generators, as flawed designs may easily lead to a catastrophic "mode collapse" of the learned distribution. We contextualize our framework by building symmetry-preserving GANs for distributions with intrinsic group symmetry, and demonstrate that both players, namely the equivariant generator and invariant discriminator, play important but distinct roles in the learning process. Empirical experiments and ablation studies across a broad range of data sets, including real-world medical imaging, validate our theory, and show our proposed methods achieve significantly improved sample fidelity and diversity -- almost an order of magnitude measured in Fréchet Inception Distance -- especially in the small data regime.
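For a finite symmetry group, the projection of a discriminator onto the invariant discriminator space (the conditional expectation with respect to the associated sigma-algebra) reduces to averaging the discriminator's outputs over the group orbit of the input. The following is a minimal sketch of this symmetrization for the cyclic rotation group C4 acting on images; the toy discriminator `D` is a hypothetical stand-in for a learned network, not part of the paper's method.

```python
import numpy as np

def symmetrize(D, x, group_size=4):
    """Project a discriminator D onto the space of C4-invariant functions
    by averaging its outputs over all 90-degree rotations of the input.
    For a finite group, the conditional expectation with respect to the
    invariant sigma-algebra is exactly this group average."""
    return np.mean([D(np.rot90(x, k)) for k in range(group_size)])

# Hypothetical toy discriminator: an arbitrary (non-invariant)
# scalar-valued function of an image.
def D(x):
    return float((x * np.arange(x.size).reshape(x.shape)).sum())

x = np.random.default_rng(0).normal(size=(8, 8))
s1 = symmetrize(D, x)
s2 = symmetrize(D, np.rot90(x))
# s1 and s2 agree (up to float error): the symmetrized score is invariant,
# even though D itself is not.
```

The same averaging idea extends to compact groups via integration against the Haar measure; in practice one averages over a sampled or discretized subset of group elements.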