Implicit generative models, such as generative adversarial networks and diffusion models, do not return likelihood values and have become prevalent in recent years. While these models have shown remarkable results, evaluating their performance is challenging. This issue is of vital importance for pushing research forward and for distinguishing meaningful gains from random noise. Currently, heuristic metrics such as the Inception Score (IS) and the Fréchet Inception Distance (FID) are the most common evaluation metrics, but what they measure is not entirely clear, and it is questionable how meaningful their scores actually are. In this work, we study the evaluation metrics of generative models by generating a high-quality synthetic dataset on which we can estimate classical metrics for comparison. Our study shows that while FID and IS do correlate with several f-divergences, their rankings of close models can vary considerably, making them problematic for fine-grained comparison. We further use this experimental setting to study which evaluation metric best correlates with our probabilistic metrics. Lastly, we look into the base features used for metrics such as FID.
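For context, FID summarizes each sample set by the mean and covariance of its Inception features and reports the Fréchet (2-Wasserstein) distance between the two resulting Gaussians. The following is a minimal sketch in Python, assuming the feature arrays have already been extracted (e.g., pool3 activations of Inception-v3); the function names here are illustrative, not from any particular library:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the product of the covariances.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard small imaginary parts from numerical error
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

def fid(real_feats, fake_feats):
    """FID from two arrays of Inception features, each of shape (n_samples, dim)."""
    mu_r, sigma_r = real_feats.mean(axis=0), np.cov(real_feats, rowvar=False)
    mu_f, sigma_f = fake_feats.mean(axis=0), np.cov(fake_feats, rowvar=False)
    return frechet_distance(mu_r, sigma_r, mu_f, sigma_f)
```

Note that the score depends on the chosen feature extractor and on the Gaussian assumption over its activations, which is part of what makes the metric's meaning unclear.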