Generative models that use explicit density modeling (e.g., variational autoencoders, flow-based generative models) involve finding a mapping from a known distribution, e.g., a Gaussian, to the unknown input distribution. This often requires searching over a class of non-linear functions (e.g., those representable by a deep neural network). While effective in practice, the associated runtime/memory costs can increase rapidly, usually as a function of the performance desired in an application. We propose a much cheaper (and simpler) strategy to estimate this mapping by adapting known results on kernel transfer operators. We show that our formulation enables highly efficient distribution approximation and sampling, and offers surprisingly good empirical performance that compares favorably with powerful baselines, with significant runtime savings. We also show that the algorithm performs well in small-sample-size settings (in brain imaging).
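To make the high-level idea concrete, below is a minimal, illustrative sketch of a regularized empirical kernel map from Gaussian samples to data samples; it is not the algorithm proposed in the paper. The pairing of Gaussian draws with data points, the RBF kernel choice, the names (rbf_kernel, push_forward), and the toy data are all assumptions made for this example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Pairwise RBF (Gaussian) kernel between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
n, d = 200, 2
Z = rng.standard_normal((n, d))                        # draws from the known Gaussian
X = 0.5 * rng.standard_normal((n, d)) + [2.0, -1.0]    # toy stand-in for the unknown data

# Regularized empirical operator: solve (K + n*lam*I) alpha = X so that a
# fresh Gaussian draw z is mapped to rbf_kernel(z, Z) @ alpha, i.e., a
# kernel-weighted combination of the observed data points. The pairing of
# Z[i] with X[i] is a simplification made purely for illustration.
lam = 1e-3
K = rbf_kernel(Z, Z)
alpha = np.linalg.solve(K + n * lam * np.eye(n), X)

def push_forward(z_new):
    """Map new samples from the known Gaussian toward the data distribution."""
    return rbf_kernel(z_new, Z) @ alpha

print(push_forward(rng.standard_normal((5, d))))
```

Note the contrast with the neural-network route described above: estimating the map here reduces to one regularized linear solve over kernel matrices, which is where the claimed runtime savings originate.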