Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models. However, measuring disentanglement has been challenging and inconsistent, often relying on an ad-hoc external model or being specific to a particular dataset. To address this, we present a method for quantifying disentanglement that uses only the generative model itself, by measuring the topological similarity of conditional submanifolds in the learned representation. The method admits both unsupervised and supervised variants. To illustrate its effectiveness and applicability, we empirically evaluate several state-of-the-art models across multiple datasets. We find that our method ranks models similarly to existing methods. We make our code publicly available at https://github.com/stanfordmlgroup/disentanglement.
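For concreteness, one plausible instantiation of this idea is sketched below: sample points from the submanifolds obtained by conditioning on a single latent coordinate, summarize each submanifold's topology with a persistence diagram, and compare diagrams with a Wasserstein distance. This is a minimal sketch under stated assumptions, not the paper's specification; the generator `decode`, the latent dimensionality, and the choice of homology degree are all illustrative, while the persistent-homology routines come from the ripser and persim packages.

```python
# Minimal sketch, assuming a hypothetical trained generator `decode(z) -> x`;
# ripser/persim supply persistent homology and diagram distances.
import numpy as np
from ripser import ripser
from persim import wasserstein

def conditional_submanifold(decode, dim, value, n=500, latent_dim=10, seed=0):
    """Sample the submanifold obtained by fixing latent coordinate `dim`."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, latent_dim))
    z[:, dim] = value                        # condition on one latent factor
    return np.asarray(decode(z)).reshape(n, -1)

def topological_dissimilarity(decode, dim, values=(-1.0, 1.0)):
    """Wasserstein distance between the H1 persistence diagrams of two
    conditional submanifolds; a disentangled latent dimension should yield
    topologically similar submanifolds (a small distance) as it varies."""
    diagrams = [ripser(conditional_submanifold(decode, dim, v))['dgms'][1]
                for v in values]             # H1 captures loop structure
    return wasserstein(diagrams[0], diagrams[1])
```

Averaging this quantity over latent dimensions (and, in the supervised variant, conditioning on known ground-truth factors instead of raw latent coordinates) would then give a single model-level score; the exact aggregation here is likewise an assumption for illustration.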