Normalizing flows map a set of independent latent variables to samples through a bijective transformation. Despite the exact correspondence between samples and latent variables, their high-level relationship is not well understood. In this paper we characterize the geometric structure of flows using principal manifolds and understand the relationship between latent variables and samples using contours. We introduce a novel class of normalizing flows, called principal manifold flows (PFs), whose contours are their principal manifolds, and a variant for injective flows (iPFs) that is more efficient to train than regular injective flows. PFs can be constructed using any flow architecture, are trained with a regularized maximum likelihood objective, and can perform density estimation on all of their principal manifolds. In our experiments we show that PFs and iPFs are able to learn the principal manifolds across a variety of datasets. Additionally, we show that PFs can perform density estimation on data that lie on a manifold with variable dimensionality, which is not possible with existing normalizing flows.
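The abstract states that PFs are trained with a regularized maximum likelihood objective. As a rough illustration only, and not the paper's actual PF objective or regularizer, the sketch below shows the standard change-of-variables log-likelihood for a toy elementwise affine flow with a placeholder L2 penalty; `affine_flow_forward`, `regularized_nll`, and `reg_weight` are hypothetical names introduced here for illustration.

```python
import numpy as np

def affine_flow_forward(x, log_scale, shift):
    """Toy bijection f(x) = x * exp(log_scale) + shift (elementwise)."""
    z = x * np.exp(log_scale) + shift
    # log |det J_f(x)| for an elementwise affine map is the sum of log-scales.
    log_det = np.sum(log_scale)
    return z, log_det

def standard_normal_logpdf(z):
    """Log-density of a standard Gaussian base distribution."""
    return -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * z.shape[-1] * np.log(2 * np.pi)

def regularized_nll(x_batch, log_scale, shift, reg_weight=0.1):
    """Negative log-likelihood via change of variables, plus a generic penalty.
    The L2 penalty is a placeholder, not the PF regularizer from the paper."""
    z, log_det = affine_flow_forward(x_batch, log_scale, shift)
    log_px = standard_normal_logpdf(z) + log_det
    penalty = reg_weight * (np.sum(log_scale ** 2) + np.sum(shift ** 2))
    return -np.mean(log_px) + penalty

# Example: evaluate the objective on random 2-D data.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 2))
print(regularized_nll(x, log_scale=np.zeros(2), shift=np.zeros(2)))
```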