disentangled-representation-papers

September 12, 2018 · CreateAMind

https://github.com/sootlasten/disentangled-representation-papers



This is a curated list of papers on disentangled (and the occasional "conventional") representation learning. Within each year, papers are ordered from newest to oldest. I've scored each paper's importance/quality (in my own personal opinion) on a scale of 1 to 3, indicated by the number of stars in front of each entry. A question mark in place of stars marks a paper I haven't fully read yet and therefore can't judge.

2018

  • ? Learning Deep Representations by Mutual Information Estimation and Maximization (Aug, Hjelm et al.) [paper]

  • ? Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies (Aug, Achille et al.) [paper]

  • ? Insights on Representational Similarity in Neural Networks with Canonical Correlation (Jun, Morcos et al.) [paper]

  • ** Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects (Jun, Kosiorek et al.) [paper]

  • *** Neural Scene Representation and Rendering (Jun, Eslami et al.) [paper]

  • ? Image-to-image translation for cross-domain disentanglement (May, Gonzalez-Garcia et al.) [paper]

  • * Learning Disentangled Joint Continuous and Discrete Representations (May, Dupont) [paper] [code]

  • ? DGPose: Disentangled Semi-supervised Deep Generative Models for Human Body Analysis (Apr, Bem et al.) [paper]

  • ? Structured Disentangled Representations (Apr, Esmaeili et al.) [paper]

  • ** Understanding disentangling in β-VAE (Apr, Burgess et al.) [paper]

  • ? On the importance of single directions for generalization (Mar, Morcos et al.) [paper]

  • ** Unsupervised Representation Learning by Predicting Image Rotations (Mar, Gidaris et al.) [paper]

  • ? Disentangled Sequential Autoencoder (Mar, Li & Mandt) [paper]

  • *** Isolating Sources of Disentanglement in Variational Autoencoders (Mar, Chen et al.) [paper] [code]

  • ** Disentangling by Factorising (Feb, Kim & Mnih) [paper]

  • ** Disentangling the Independently Controllable Factors of Variation by Interacting with the World (Feb, Bengio's group) [paper]

  • ? On the Latent Space of Wasserstein Auto-Encoders (Feb, Rubenstein et al.) [paper]

  • ? Auto-Encoding Total Correlation Explanation (Feb, Gao et al.) [paper]

  • ? Fixing a Broken ELBO (Feb, Alemi et al.) [paper]

  • * Learning Disentangled Representations with Wasserstein Auto-Encoders (Feb, Rubenstein et al.) [paper]

  • ? Rethinking Style and Content Disentanglement in Variational Autoencoders (Feb, Shu et al.) [paper]

  • ? A Framework for the Quantitative Evaluation of Disentangled Representations (Feb, Eastwood & Williams) [paper]

2017

  • ? The β-VAE's Implicit Prior (Dec, Hoffman et al.) [paper]

  • ** The Multi-Entity Variational Autoencoder (Dec, Nash et al.) [paper]

  • ? Learning Independent Causal Mechanisms (Dec, Parascandolo et al.) [paper]

  • ? Variational Inference of Disentangled Latent Concepts from Unlabeled Observations (Nov, Kumar et al.) [paper]

  • * Neural Discrete Representation Learning (Nov, Oord et al.) [paper]

  • ? Disentangled Representations via Synergy Minimization (Oct, Steeg et al.) [paper]

  • ? Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data (Sep, Hsu et al.) [paper] [code]

  • * Experiments on the Consciousness Prior (Sep, Bengio & Fedus) [paper]

  • ** The Consciousness Prior (Sep, Bengio) [paper]

  • ? Disentangling Motion, Foreground and Background Features in Videos (Jul, Lin et al.) [paper]

  • * SCAN: Learning Hierarchical Compositional Visual Concepts (Jul, Higgins et al.) [paper]

  • *** DARLA: Improving Zero-Shot Transfer in Reinforcement Learning (Jul, Higgins et al.) [paper]

  • ** Unsupervised Learning via Total Correlation Explanation (Jun, Ver Steeg) [paper] [code]

  • ? PixelGAN Autoencoders (Jun, Makhzani & Frey) [paper]

  • ? Emergence of Invariance and Disentanglement in Deep Representations (Jun, Achille & Soatto) [paper]

  • ** A Simple Neural Network Module for Relational Reasoning (Jun, Santoro et al.) [paper]

  • ? Learning Disentangled Representations with Semi-Supervised Deep Generative Models (Jun, Siddharth et al.) [paper]

  • ? Unsupervised Learning of Disentangled Representations from Video (May, Denton & Birodkar) [paper]

2016

  • ** Deep Variational Information Bottleneck (Dec, Alemi et al.) [paper]

  • *** β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework (Nov, Higgins et al.) [paper] [code]

  • ? Disentangling factors of variation in deep representations using adversarial training (Nov, Mathieu et al.) [paper]

  • ** Information Dropout: Learning Optimal Representations Through Noisy Computation (Nov, Achille & Soatto) [paper]

  • ** InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (Jun, Chen et al.) [paper]

  • *** Building Machines That Learn and Think Like People (Apr, Lake et al.) [paper]

  • *** Attend, Infer, Repeat: Fast Scene Understanding with Generative Models (Mar, Eslami et al.) [paper]

  • * Understanding Visual Concepts with Continuation Learning (Feb, Whitney et al.) [paper]

  • ? Disentangled Representations in Neural Models (Feb, Whitney) [paper]

Older work

  • ** Deep Convolutional Inverse Graphics Network (2015, Kulkarni et al.) [paper]

  • ? Learning to Disentangle Factors of Variation with Manifold Interaction (2014, Reed et al.) [paper]

  • *** Representation Learning: A Review and New Perspectives (2013, Bengio et al.) [paper]

  • ? Disentangling Factors of Variation via Generative Entangling (2012, Desjardins et al.) [paper]

  • *** Transforming Auto-encoders (2011, Hinton et al.) [paper]

  • ** Learning Factorial Codes By Predictability Minimization (1992, Schmidhuber) [paper]

  • *** Self-Organization in a Perceptual Network (1988, Linsker) [paper]

Talks

  • Building Machines that Learn & Think Like People (2018, Tenenbaum) [youtube]

  • From Deep Learning of Disentangled Representations to Higher-level Cognition (2018, Bengio) [youtube]

  • What is wrong with convolutional neural nets? (2017, Hinton) [youtube]




