The International Conference on Artificial Intelligence and Statistics (AISTATS 2020) will be held from Wednesday, June 3 to Friday, June 5, 2020 in Palermo, Sicily, Italy. Since its inception in 1985, AISTATS has been an interdisciplinary gathering of researchers working at the intersection of artificial intelligence, machine learning, statistics, and related areas. Official website:

Latest Papers

A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAEs). Recent developments demonstrate that disentanglement cannot be obtained in a fully unsupervised setting without inductive biases on models and data. However, Khemakhem et al. (AISTATS 2020) suggest that employing a particular form of factorized prior, conditionally dependent on auxiliary variables complementing the input observations, can be one such bias, resulting in an identifiable model with guarantees on disentanglement. Working along this line, we propose a novel VAE-based generative model with theoretical guarantees on identifiability. We obtain our conditional prior over the latents by learning an optimal representation, which imposes an additional strength on their regularization. We also extend our method to semi-supervised settings. Experimental results indicate superior performance with respect to state-of-the-art approaches, according to several established metrics proposed in the disentanglement literature.
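As a rough illustration of the kind of model the abstract describes (a VAE whose factorized prior over the latents is conditioned on an auxiliary variable, in the spirit of Khemakhem et al., AISTATS 2020), the sketch below shows a minimal conditional-prior VAE in PyTorch. All layer sizes, network names, and the diagonal-Gaussian parameterization are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of a VAE with a conditional prior
# p(z | u) parameterized by a network of an auxiliary variable u.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalPriorVAE(nn.Module):
    def __init__(self, x_dim, u_dim, z_dim, hidden=128):
        super().__init__()
        # Encoder q(z | x, u): diagonal Gaussian.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),
        )
        # Decoder p(x | z).
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )
        # Conditional prior p(z | u): factorized Gaussian whose mean and
        # variance depend on the auxiliary variable u (hypothetical choice).
        self.prior_net = nn.Sequential(
            nn.Linear(u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),
        )

    def forward(self, x, u):
        mu_q, logvar_q = self.encoder(torch.cat([x, u], dim=-1)).chunk(2, dim=-1)
        # Reparameterization trick.
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
        x_hat = self.decoder(z)
        mu_p, logvar_p = self.prior_net(u).chunk(2, dim=-1)
        # KL( q(z|x,u) || p(z|u) ) between two diagonal Gaussians.
        kl = 0.5 * (
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1.0
        ).sum(dim=-1)
        recon = F.mse_loss(x_hat, x, reduction="none").sum(dim=-1)
        # Negative ELBO (per sample), to be minimized.
        return (recon + kl).mean()


# Usage sketch: x is an observation, u its auxiliary variable (e.g. a label).
model = ConditionalPriorVAE(x_dim=784, u_dim=10, z_dim=8)
loss = model(torch.randn(32, 784), torch.randn(32, 10))
loss.backward()
```

The key difference from a standard VAE is that the KL term regularizes the posterior toward a prior that varies with u rather than a fixed standard normal, which is what makes identifiability arguments of the Khemakhem et al. type possible.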
