Disentangled representation learning aims to learn low-dimensional representations where each dimension corresponds to an underlying generative factor. While the Variational Auto-Encoder (VAE) is widely used for this purpose, most existing methods assume independence among factors, a simplification that does not hold in many real-world scenarios where factors are often interdependent and causally related. To overcome this limitation, we propose the Disentangled Causal Variational Auto-Encoder (DCVAE), a novel supervised VAE framework that integrates causal flows into the representation learning process, enabling the learning of more meaningful and interpretable disentangled representations. We evaluate DCVAE on both synthetic and real-world datasets, demonstrating its superior performance in causal disentanglement and intervention experiments. Furthermore, DCVAE outperforms state-of-the-art methods in various downstream tasks, highlighting its potential for learning the true causal structure among factors.
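To make the idea of integrating a causal structure into a VAE's latent space concrete, below is a minimal, illustrative sketch, not the authors' exact DCVAE architecture. It shows a VAE whose reparameterized noise is mixed through a learnable linear structural causal model, z = (I - Aᵀ)⁻¹ ε, so latent dimensions may depend on one another via an adjacency matrix A; all class names, dimensions, and the masking scheme are assumptions for illustration.

```python
# Illustrative sketch only: a VAE with a linear SCM layer on the latent noise.
# Not the authors' DCVAE; names, shapes, and the DAG mask are assumptions.
import torch
import torch.nn as nn

class CausalLatentVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=4, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Learnable adjacency among latent factors; an upper-triangular mask
        # keeps the implied graph acyclic.
        self.A = nn.Parameter(torch.zeros(z_dim, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterized exogenous noise.
        eps = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Linear SCM: z = (I - A^T)^{-1} eps, mixing latents along causal edges.
        mask = torch.triu(torch.ones_like(self.A), diagonal=1)
        I = torch.eye(self.A.size(0), device=x.device)
        z = torch.linalg.solve(I - (self.A * mask).T, eps.T).T
        x_hat = self.dec(z)
        # Standard Gaussian KL term; in a supervised setting, factor labels
        # would additionally supervise the latent dimensions z.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return x_hat, kl, z
```

Interventions can then be simulated by fixing a latent coordinate before decoding and letting the learned adjacency propagate its effect to downstream factors.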