We consider distance functions between conditional distributions. We focus on the Wasserstein metric and its Gaussian case, known as the Fréchet Inception Distance (FID). We develop conditional versions of these metrics and analyze their relations. We then numerically compare the metrics in the context of performance evaluation of conditional generative models. Our results show that the metrics are similar in classical models, which are less susceptible to conditional collapse, but the conditional distances are more informative in modern unsupervised, semi-supervised, and unpaired models, where learning the relation between inputs and outputs is the main challenge.
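For reference, the FID mentioned above is the squared 2-Wasserstein distance between Gaussian fits of the two compared distributions; the conditional variants developed in the paper are not reproduced here. Writing the fitted Gaussians as \(\mathcal{N}(\mu_1,\Sigma_1)\) and \(\mathcal{N}(\mu_2,\Sigma_2)\) (notation introduced only for this illustration), the standard closed form is
\[
W_2^2\big(\mathcal{N}(\mu_1,\Sigma_1),\,\mathcal{N}(\mu_2,\Sigma_2)\big)
= \lVert \mu_1 - \mu_2 \rVert_2^2
+ \operatorname{Tr}\!\Big(\Sigma_1 + \Sigma_2 - 2\big(\Sigma_1^{1/2}\Sigma_2\,\Sigma_1^{1/2}\big)^{1/2}\Big).
\]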