Normalizing flows learn a diffeomorphic mapping between the target and base distributions, and the Jacobian determinant of that mapping forms another real-valued function. In this paper, we show that the Jacobian determinant mapping is unique for a given pair of distributions, so the likelihood objective of flows has a unique global optimum. In particular, for a class of flows the likelihood is explicitly expressed through the eigenvalues of the auto-correlation matrix of the individual data points, independently of the parameterization of the neural network; this yields a theoretical optimal value of the likelihood objective and relates the problem to probabilistic PCA. Additionally, the Jacobian determinant measures local volume change and is maximized when maximum likelihood estimation (MLE) is used for optimization. To stabilize the training of normalizing flows, a balance must be maintained between the expansion and contraction of volume, which amounts to Lipschitz constraints on the diffeomorphic mapping and its inverse. Based on these theoretical results, we propose several principles for designing normalizing flows, and we conduct numerical experiments on high-dimensional datasets (such as CelebA-HQ 1024x1024) that demonstrate improved training stability.
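To make the eigenvalue claim concrete, the following is a minimal sketch (not code from the paper; the function names are hypothetical) of the standard linear-flow calculation it echoes: for a linear flow z = Wx with a standard normal base, the expected log-likelihood is maximized at W^T W = Sigma^{-1}, where Sigma = E[xx^T] is the auto-correlation matrix, and the optimum -d/2 (1 + log 2*pi) - 1/2 * sum_i log lambda_i depends only on the eigenvalues lambda_i of Sigma, the same spectrum that appears in probabilistic PCA.

```python
import numpy as np

def optimal_flow_loglik(X):
    """Closed-form optimum of E[log p(x)] over linear flows z = W x with a
    standard normal base, written via the eigenvalues of the sample
    auto-correlation matrix Sigma = E[x x^T].  (Illustrative sketch.)"""
    d = X.shape[1]
    sigma = X.T @ X / X.shape[0]           # sample auto-correlation matrix
    eigvals = np.linalg.eigvalsh(sigma)    # eigenvalues lambda_1..lambda_d
    return -0.5 * d * (1.0 + np.log(2.0 * np.pi)) - 0.5 * np.sum(np.log(eigvals))

def linear_flow_loglik(X, W):
    """Average log-likelihood of the linear flow z = W x:
    log p(x) = log N(Wx; 0, I) + log|det W|."""
    d = X.shape[1]
    Z = X @ W.T
    quad = 0.5 * np.mean(np.sum(Z**2, axis=1))
    return -0.5 * d * np.log(2.0 * np.pi) - quad + np.log(abs(np.linalg.det(W)))

rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))
X = rng.normal(size=(20000, d)) @ A.T      # correlated Gaussian data

# The whitening flow W = Sigma^{-1/2} attains the closed-form optimum.
sigma = X.T @ X / X.shape[0]
lam, U = np.linalg.eigh(sigma)
W_opt = U @ np.diag(lam**-0.5) @ U.T
print(optimal_flow_loglik(X))      # bound computed from the eigenvalues alone
print(linear_flow_loglik(X, W_opt))  # equal, since Sigma is the sample matrix
```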
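The Lipschitz balance between expansion and contraction is commonly enforced in practice by bounding each layer's scale factors. Below is a minimal, hypothetical sketch (not the paper's architecture) of an affine coupling layer whose log-scales are soft-clamped with tanh, so that both the forward map and its inverse have bounded per-coordinate Lipschitz constants and the log-Jacobian-determinant stays in a fixed range.

```python
import numpy as np

class BoundedAffineCoupling:
    """Toy affine coupling layer (a sketch, not the paper's design).
    Soft-clamping the log-scale s to (-alpha, alpha) bounds per-coordinate
    expansion by e^alpha and contraction by e^-alpha, so the forward map
    and its inverse are both Lipschitz."""

    def __init__(self, d, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.d1 = d // 2
        self.alpha = alpha
        # Stand-in for a neural network: one linear layer producing (s, t).
        self.W = rng.normal(scale=0.1, size=(2 * (d - self.d1), self.d1))

    def _scale_shift(self, x1):
        h = x1 @ self.W.T
        s_raw, t = np.split(h, 2, axis=-1)
        s = self.alpha * np.tanh(s_raw)    # log-scale clamped to (-alpha, alpha)
        return s, t

    def forward(self, x):
        x1, x2 = x[..., :self.d1], x[..., self.d1:]
        s, t = self._scale_shift(x1)
        y2 = x2 * np.exp(s) + t
        logdet = np.sum(s, axis=-1)        # log|det J| = sum of log-scales
        return np.concatenate([x1, y2], axis=-1), logdet

    def inverse(self, y):
        y1, y2 = y[..., :self.d1], y[..., self.d1:]
        s, t = self._scale_shift(y1)       # y1 == x1, so (s, t) are recovered
        x2 = (y2 - t) * np.exp(-s)
        return np.concatenate([y1, x2], axis=-1)

layer = BoundedAffineCoupling(d=6)
x = np.random.default_rng(1).normal(size=(4, 6))
y, logdet = layer.forward(x)
assert np.allclose(layer.inverse(y), x)   # invertibility check
# |logdet| <= alpha * (d - d1): volume change is bounded in both directions.
```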