Normalizing flow (NF) has gained popularity over traditional maximum-likelihood-based methods due to its strong capability to model complex data distributions. However, the standard approach, which maps the observed data to a normal distribution, has difficulty handling data distributions with multiple relatively isolated modes. To overcome this issue, we propose a new framework based on variational latent representations to improve the practical performance of NF. The idea is to replace the standard normal latent variable with a more general latent representation, jointly learned via Variational Bayes. For example, by taking the latent representation to be a discrete sequence, our framework can learn a Transformer model that generates the latent sequence and an NF model that generates the continuous data distribution conditioned on that sequence. The resulting method is significantly more powerful than the standard normalizing flow approach for generating data distributions with multiple modes. Extensive experiments demonstrate the advantages of NF with variational latent representations.
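To make the ancestral-sampling structure concrete, here is a minimal, hypothetical sketch (not the paper's implementation): a toy categorical prior stands in for the learned Transformer over latent sequences, and a per-code conditional affine map stands in for the conditional NF. All parameter values below are illustrative assumptions; in the proposed framework they would be learned jointly via Variational Bayes.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3  # number of discrete latent codes (one per isolated mode)
# Per-code affine parameters; placed far apart to create isolated modes.
shifts = np.array([-4.0, 0.0, 4.0])
log_scales = np.array([-0.5, -0.5, -0.5])

def sample(n):
    """Ancestral sampling: draw the latent code first, then push base
    noise through the code-conditioned flow."""
    codes = rng.integers(0, K, size=n)   # stand-in for the Transformer prior
    eps = rng.standard_normal(n)         # base N(0, 1) noise
    x = eps * np.exp(log_scales[codes]) + shifts[codes]
    return x, codes

def log_prob(x, codes):
    """Exact conditional log-density via the change-of-variables formula."""
    eps = (x - shifts[codes]) * np.exp(-log_scales[codes])
    log_base = -0.5 * (eps**2 + np.log(2 * np.pi))
    # log|det d(eps)/dx| = -log_scale for this affine map
    return log_base - log_scales[codes]

x, codes = sample(5000)
# Each latent code yields a separate, well-separated mode; a single
# unconditional flow with a Gaussian base struggles to cover such a mixture.
```

Conditioning on the discrete code lets each mode get its own simple, invertible map, while the multimodality is absorbed by the discrete prior rather than forced through one continuous bijection.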