Deep generative models are becoming widely used across science and industry for a variety of purposes. A common challenge is achieving a precise implicit or explicit representation of the data probability density. Recent proposals have suggested using classifier weights to refine the learned density of deep generative models. We extend this idea to all types of generative models and show how latent space refinement via iterated generative modeling can circumvent topological obstructions and improve precision. This methodology also applies to cases where the target model is non-differentiable and has many internal latent dimensions which must be marginalized over before refinement. We demonstrate our Latent Space Refinement (LaSeR) protocol on a variety of examples, focusing on combinations of Normalizing Flows and Generative Adversarial Networks.