A Discussion of the Assumptions behind Disentangled Representations

December 10, 2018 · CreateAMind

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

(Submitted on 29 Nov 2018 (v1), last revised 2 Dec 2018 (this version, v2))

In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding, along with quantitative evaluation metrics of disentanglement, have been proposed; yet the efficacy of the proposed approaches and the utility of the proposed notions of disentanglement have not been challenged in prior work. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions.
We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12,000 models covering the six most prominent methods, and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that different methods successfully enforce properties "encouraged" by the corresponding losses. On the negative side, we observe that well-disentangled models seemingly cannot be identified without access to ground-truth labels, even if we are allowed to transfer hyperparameters across data sets. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks.
These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
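For readers who want the formal core of the impossibility claim, the paper's main theorem can be paraphrased as follows (notation simplified from my reading of the paper; see the original for the precise statement). For $d > 1$, let $z \sim P$ admit a density $p(z) = \prod_{i=1}^{d} p(z_i)$. Then there exists an infinite family of bijections $f : \operatorname{supp}(z) \to \operatorname{supp}(z)$ with

$$
\frac{\partial f_i(u)}{\partial u_j} \neq 0 \;\text{ a.e. for all } i, j,
\qquad\text{and}\qquad
P(z \le u) = P(f(z) \le u) \;\text{ for all } u \in \operatorname{supp}(z).
$$

That is, $f(z)$ is completely entangled with $z$ yet has exactly the same marginal distribution; since both latents explain the observations $x$ equally well, no unsupervised method can prefer the disentangled $z$ over the entangled $f(z)$ without an inductive bias.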




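Per the abstract, the six methods evaluated are all auto-encoding approaches that regularize the standard VAE objective. As a concrete example, here is a minimal sketch of the β-VAE loss, the simplest such variant, which up-weights the KL term by a factor β > 1. This is an illustrative PyTorch-style sketch under my own naming, not the authors' implementation:

```python
# Minimal beta-VAE loss sketch (illustrative; `decoder` is a user-supplied
# module ending in a sigmoid, not the paper's actual architecture).
import torch
import torch.nn.functional as F

def beta_vae_loss(x, decoder, mu, log_var, beta=4.0):
    """Negative ELBO with the KL term up-weighted by beta; beta > 1
    pressures the posterior toward a factorized structure."""
    # Reparameterization trick: sample z ~ q(z|x) differentiably.
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)
    x_recon = decoder(z)
    # Bernoulli reconstruction term (per-example mean over the batch),
    # as is common for binary image data such as dSprites.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # Analytic KL( q(z|x) || N(0, I) ).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp()) / x.size(0)
    return recon + beta * kl
```

Setting beta=1.0 recovers the plain VAE; the other methods in the study replace or augment the KL penalty with different regularizers.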
Our contributions. The original motivation of this work was to provide a neutral large-scale study that benchmarks different unsupervised disentanglement methods and metrics on a wide set of data sets in a fair, reproducible experimental setup. However, the empirical evidence led us instead to challenge many commonly held assumptions in this field. Our key contributions can be summarized as follows:

• We theoretically prove that (perhaps unsurprisingly) the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases both on the considered learning approaches and the data sets.

• We investigate current approaches and their inductive biases in a reproducible large-scale experimental study with a sound experimental protocol for unsupervised disentanglement learning. We implement from scratch six recent unsupervised disentanglement learning methods as well as six disentanglement measures, and train more than 12,000 models on seven data sets.

• We evaluate our experimental results and challenge many common assumptions in unsupervised disentanglement learning: (i) While all considered methods prove effective at ensuring that the individual dimensions of the aggregated posterior (which is sampled) are not correlated, only one method also consistently ensures that the individual dimensions of the representation (which is taken to be the mean) are not correlated; a minimal diagnostic sketch follows this list. (ii) We do not find any evidence that the considered models can be used to reliably learn disentangled representations in an unsupervised manner, since random seeds and hyperparameters seem to matter more than the model choice. Furthermore, well-disentangled models seemingly cannot be identified without access to ground-truth labels, even if we are allowed to transfer good hyperparameter values across data sets. (iii) For the considered models and data sets, we cannot validate the assumption that disentanglement is useful for downstream tasks, for example through a decreased sample complexity of learning.

• Based on this empirical evidence, we suggest three critical areas of further research: (i) The role of inductive biases and implicit and explicit supervision should be made explicit: unsupervised model selection persists as a key question. (ii) The concrete practical benefits of enforcing a specific notion of disentanglement of the learned representations should be demonstrated. (iii) Experiments should be conducted in a reproducible experimental setup on data sets of varying degrees of difficulty.
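Finding (i) above is easy to probe on any trained model: compute the correlation matrix of the latent dimensions once from sampled codes and once from posterior means. A minimal numpy sketch, assuming a hypothetical user-supplied encode function that maps a batch to (mu, log_var) arrays of shape (batch, latent_dim):

```python
import numpy as np

def latent_correlations(encode, data_loader):
    """Compare dimension-wise correlations of sampled codes vs. mean codes.

    `encode` and `data_loader` are hypothetical stand-ins for a trained
    model's encoder and an iterator over data batches.
    """
    mus, samples = [], []
    for x in data_loader:
        mu, log_var = encode(x)
        std = np.exp(0.5 * log_var)
        samples.append(mu + std * np.random.randn(*mu.shape))  # z ~ q(z|x)
        mus.append(mu)
    mus = np.concatenate(mus)          # the "mean" representation
    samples = np.concatenate(samples)  # the sampled aggregated posterior
    # Each result is a (latent_dim x latent_dim) correlation matrix;
    # the off-diagonal entries are what the regularizers act on.
    return np.corrcoef(mus, rowvar=False), np.corrcoef(samples, rowvar=False)
```

Per the study, all six methods drive the off-diagonal entries of the sampled-code matrix toward zero, while only one method does so consistently for the mean codes, which are the representation actually used downstream.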