Disentanglement is a useful property in representation learning that increases the interpretability of generative models such as Variational Auto-Encoders (VAEs), Generative Adversarial Networks (GANs), and their many variants. Typically in such models, an increase in disentanglement performance is traded off against generation quality. In the context of latent-space models, this work presents a representation learning framework that explicitly promotes disentanglement by encouraging orthogonal directions of variation. The proposed objective is the sum of an auto-encoder error term and a Principal Component Analysis reconstruction error in the feature space. This objective admits an interpretation as a Restricted Kernel Machine with the eigenvector matrix valued on the Stiefel manifold. Our analysis shows that such a construction promotes disentanglement by matching the principal directions in the latent space with the directions of orthogonal variation in the data space. In an alternating minimization scheme, we use the Cayley ADAM algorithm, a stochastic optimization method on the Stiefel manifold, together with the Adam optimizer. Our theoretical discussion and various experiments show that the proposed model improves over many VAE variants in terms of both generation quality and disentangled representation learning.
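As a minimal sketch of the objective described above, the following combines an auto-encoder reconstruction error with a PCA reconstruction error of the encoded features, where the basis `U` has orthonormal columns (a point on the Stiefel manifold). The function and argument names (`stiefel_pca_loss`, `enc`, `dec`) are illustrative, not the paper's implementation, and the encoder/decoder are left as generic callables.

```python
import numpy as np

def stiefel_pca_loss(x, enc, dec, U):
    """Sum of an auto-encoder error term and a PCA reconstruction
    error in the feature space, with U an orthonormal basis
    (U.T @ U = I), i.e. a point on the Stiefel manifold."""
    h = enc(x)                        # features of the input batch
    ae_err = np.sum((x - dec(h)) ** 2)
    proj = h @ U @ U.T                # projection of features onto span(U)
    pca_err = np.sum((h - proj) ** 2)
    return ae_err + pca_err

# Usage with identity encoder/decoder and a random orthonormal basis:
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))  # 2-dim orthonormal basis
identity = lambda z: z
loss = stiefel_pca_loss(X, identity, identity, U)
```

In the full method, `U` would be updated with a Riemannian optimizer such as Cayley ADAM to stay on the Stiefel manifold, while the encoder and decoder parameters are updated with Adam in an alternating scheme.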