Normalizing flows are powerful non-parametric statistical models that function as a hybrid between density estimators and generative models. Current learning algorithms for normalizing flows assume that data points are sampled independently, an assumption that is frequently violated in practice and may lead to erroneous density estimation and data generation. We propose a likelihood objective for normalizing flows that incorporates dependencies between the data points, and we derive a flexible and efficient learning algorithm suitable for different dependency structures. We show that respecting dependencies between observations can improve empirical results on both synthetic and real-world data.
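For reference, a standard normalizing flow with bijection $f_\theta$ and base density $p_Z$ is trained by maximizing the i.i.d. change-of-variables log-likelihood over the observations. The second expression below is only an illustrative sketch of how such an objective might account for dependent observations, here via per-observation weights $w_i$; it is an assumed form for exposition, not necessarily the exact objective proposed in this work:

\[
\mathcal{L}_{\mathrm{iid}}(\theta) \;=\; \sum_{i=1}^{n} \Big[ \log p_Z\big(f_\theta(x_i)\big) + \log \big|\det J_{f_\theta}(x_i)\big| \Big],
\qquad
\mathcal{L}_{\mathrm{dep}}(\theta) \;=\; \sum_{i=1}^{n} w_i \Big[ \log p_Z\big(f_\theta(x_i)\big) + \log \big|\det J_{f_\theta}(x_i)\big| \Big].
\]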