Learning disentangled representations is an important goal in representation learning: to learn a low-dimensional representation of data in which each dimension corresponds to one underlying generative factor. Because generative factors may be causally related, causal disentangled representation learning has received widespread attention. In this paper, we first propose a new type of flow, called causal flows, that can incorporate causal structure information into the model. Building on the variational autoencoder (VAE) commonly used in disentangled representation learning, we design a new model, CF-VAE, which enhances the disentanglement ability of the VAE encoder by utilizing causal flows. By further introducing supervision from ground-truth factors, we establish the disentanglement identifiability of our model. Experimental results on both synthetic and real datasets show that CF-VAE achieves causal disentanglement and supports intervention experiments. Moreover, CF-VAE exhibits strong performance on downstream tasks and has the potential to learn the causal structure among factors.
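To make the idea of a causal flow concrete, the following is a minimal sketch, not taken from the paper, assuming the flow is an affine autoregressive-style transform whose conditioning is masked by a causal adjacency matrix A (A[i, j] = 1 meaning factor j is a parent of factor i); the function name and weight matrices are illustrative placeholders.

```python
import numpy as np

def causal_affine_flow(z, A, W_shift, W_scale):
    """Transform latent z so that each output dimension is conditioned only on
    its causal parents, as specified by the adjacency matrix A.
    W_shift and W_scale are illustrative weight matrices (hypothetical)."""
    # Mask the weights so that dimension i only sees its parents.
    shift = (W_shift * A) @ z
    log_scale = (W_scale * A) @ z
    z_out = z * np.exp(log_scale) + shift
    # With A a DAG in topological order (zero diagonal), the Jacobian is
    # triangular, so its log-determinant is the sum of the diagonal log-scales.
    log_det = np.sum(log_scale)
    return z_out, log_det

# Tiny usage example: 3 latent factors with a chain structure 0 -> 1 -> 2.
rng = np.random.default_rng(0)
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
z = rng.normal(size=3)
W_shift = rng.normal(size=(3, 3))
W_scale = 0.1 * rng.normal(size=(3, 3))
z_out, log_det = causal_affine_flow(z, A, W_shift, W_scale)
print(z_out, log_det)
```

Under these assumptions, the masked conditioning is what injects the causal structure into the flow, while the triangular Jacobian keeps the density change tractable, as in standard autoregressive flows.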