This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference of conditional probabilities between the original and new distributions. We approximate this difference with the model's generalization ability so that it fits the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to a new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ faster than the previous method on various tasks. The source code is available at \url{https://github.com/yuanpeng16/EDCR}.