Generative models can exhibit distinct failure modes, such as mode dropping and low-quality samples, which cannot be captured by a single scalar metric. To address this, recent works propose evaluating generative models using precision and recall, where precision measures the quality of the samples and recall measures the coverage of the target distribution. Although a variety of discrepancy measures between the target and the estimated distribution are used to train generative models, it is unclear what precision-recall trade-offs are achieved by the various choices of discrepancy measure. In this paper, we show that achieving a specified precision-recall trade-off corresponds to minimising $f$-divergences from a family we call the {\em PR-divergences}. Conversely, any $f$-divergence can be written as a linear combination of PR-divergences and therefore corresponds to minimising a weighted precision-recall trade-off. Further, we propose a novel training procedure that allows a normalizing flow to minimise any $f$-divergence and, in particular, to achieve a given precision-recall trade-off.
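As a point of reference, the following is a minimal sketch of the objects involved, not necessarily the paper's exact notation. An $f$-divergence between the target distribution $P$ and the model $\hat{P}$ is defined for a convex generator $f$ with $f(1)=0$; the decomposition claimed above can then be written schematically, where $D_{\lambda}$ denotes the PR-divergence at trade-off level $\lambda$ and $\Gamma_f \ge 0$ is a weight function determined by $f$ (the symbols $D_{\lambda}$ and $\Gamma_f$ are illustrative placeholders).
\[
  D_f\bigl(P \,\|\, \hat{P}\bigr)
    = \mathbb{E}_{x \sim \hat{P}}\!\left[\, f\!\left( \tfrac{p(x)}{\hat{p}(x)} \right) \right],
  \qquad
  D_f\bigl(P \,\|\, \hat{P}\bigr)
    = \int_{0}^{\infty} \Gamma_f(\lambda)\, D_{\lambda}\bigl(P \,\|\, \hat{P}\bigr)\, \mathrm{d}\lambda .
\]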