There have been many recent advances in representation learning; however, unsupervised representation learning can still struggle with model identification issues. Variational Auto-Encoders (VAEs) and their extensions such as $\beta$-VAEs have been shown to locally align latent variables with PCA directions, which can improve model disentanglement under some conditions. Borrowing inspiration from Independent Component Analysis (ICA) and sparse coding, we propose applying an $L_1$ loss to the VAE's generative Jacobian during training to encourage local alignment of the latent variables with independent factors of variation in the data. We demonstrate our results on a variety of datasets, giving qualitative and quantitative results using information-theoretic and modularity measures that show our added $L_1$ cost encourages local axis alignment of the latent representation with individual factors of variation.
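To make the core idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of penalizing the entrywise $L_1$ norm of a generative network's Jacobian during training, written in JAX. The decoder `decode`, the loss structure, and the weight `lam` are hypothetical placeholders chosen for brevity; the KL term of the ELBO is omitted.

```python
# Sketch (assumption): an L1 penalty on the decoder's Jacobian added
# to a VAE-style reconstruction loss. All names here are illustrative.
import jax
import jax.numpy as jnp


def decode(params, z):
    # Hypothetical one-layer generative network: latent z -> observation x.
    return jnp.tanh(params["W"] @ z + params["b"])


def jacobian_l1(params, z):
    # Jacobian of the generator output w.r.t. the latent code at z,
    # shape (x_dim, z_dim); its entrywise L1 norm encourages each
    # latent coordinate to affect only a few output directions locally.
    J = jax.jacfwd(lambda zz: decode(params, zz))(z)
    return jnp.sum(jnp.abs(J))


def loss(params, z, x, lam=0.1):
    # Reconstruction term plus the sparsity penalty on the generative
    # Jacobian (the ELBO's KL term is omitted for brevity).
    recon = jnp.sum((decode(params, z) - x) ** 2)
    return recon + lam * jacobian_l1(params, z)


# Usage example with random parameters and data.
key = jax.random.PRNGKey(0)
params = {"W": jax.random.normal(key, (8, 3)), "b": jnp.zeros(8)}
z = jax.random.normal(jax.random.PRNGKey(1), (3,))
x = jnp.ones(8)
print(loss(params, z, x))
grads = jax.grad(loss)(params, z, x)  # the penalty is differentiable end-to-end
```

Because the Jacobian is computed with forward-mode autodiff inside the loss, the penalty can be minimized jointly with the reconstruction objective by any standard gradient-based optimizer.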