I describe a trick for training flow models using a prescribed rule as a surrogate for maximum likelihood. For non-conditional models the utility of this trick is limited, but an extension of the approach, applied to maximum likelihood of the joint probability distribution of data and conditioning information, can be used to train sophisticated \textit{conditional} flow models. Unlike previous approaches, this method is quite simple: it requires no explicit knowledge of the distribution of conditions, no auxiliary networks or other special architecture, and no loss terms beyond maximum likelihood, and it preserves the correspondence between latent and data spaces. The resulting models have all the properties of non-conditional flow models, are robust to unexpected inputs, and can predict the distribution of solutions conditioned on a given input. They come with guarantees of prediction representativeness and are a natural and powerful way to solve highly uncertain problems. I demonstrate these properties on easily visualized toy problems, then use the method to successfully generate class-conditional images and to reconstruct highly degraded images via super-resolution.
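As a hedged sketch of the joint-likelihood idea described above (the notation here is my own assumption, not taken from the paper): if an invertible flow $f$ maps a data point $x$ together with its condition $c$ to a latent code $z = f(x, c)$ with a simple prior $p_Z$, then maximum likelihood on the joint distribution of data and conditioning information amounts to maximizing the usual change-of-variables objective over the concatenated input,
\[
\log p(x, c) = \log p_Z\!\big(f(x, c)\big) + \log \left| \det \frac{\partial f(x, c)}{\partial (x, c)} \right|,
\]
which needs no separate model of $p(c)$ and adds no loss terms beyond the log-likelihood itself, consistent with the simplicity claims in the abstract.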