An introduction to SPNs: image visualization, code, applications, and models

June 25, 2019 · CreateAMind


sum-product networks



http://proceedings.mlr.press/v97/tan19b/tan19b.pdf

Hierarchical Decompositional Mixtures of Variational Autoencoders


https://github.com/cambridge-mlg/SPVAE


Abstract

Variational autoencoders (VAEs) have received considerable attention, since they allow us to learn expressive neural density estimators effectively and efficiently. However, learning and inference in VAEs is still problematic due to the sensitive interplay between the generative model and the inference network. Since these problems become generally more severe in high dimensions, we propose a novel hierarchical mixture model over low-dimensional VAE experts. Our model decomposes the overall learning problem into many smaller problems, which are coordinated by the hierarchical mixture, represented by a sum-product network. In experiments we show that our models outperform classical VAEs on almost all of our experimental benchmarks. Moreover, we show that our model is highly data efficient and degrades very gracefully in extremely low data regimes.


The key observation used in this paper is that SPNs allow arbitrary representations for the leaves – also intractable ones like VAE distributions. The idea of this hybrid model class, which we denote as sum-product VAE (SPVAE), is to combine the best of both worlds: On the one hand, it extends the model capabilities of SPNs by using flexible VAEs as leaves; on the other hand, it applies the above-mentioned divide-and-conquer approach to VAEs, which can be expected to lead to an easier learning problem. Since SPNs can be interpreted as structured latent variable models (Zhao et al., 2015; Peharz et al., 2017), the generative process of SPVAEs can be depicted as in Fig. 1 (solid lines).
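To make the divide-and-conquer idea concrete, here is a minimal sketch of a sum node mixing product nodes whose leaves are low-dimensional density experts. Simple Gaussian leaves stand in for the paper's VAE leaves; the structure, weights, and all names below are our own illustration, not the authors' code.

```python
import math

# Hypothetical SPVAE-style density: a sum node (mixture) over product
# nodes, where each product node splits the data vector into disjoint
# low-dimensional blocks scored by independent "leaf experts".
# Gaussians stand in for VAE leaves here, purely for illustration.

def gauss_logpdf(x, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def leaf_logp(block, mus):
    # One low-dimensional leaf expert: independent Gaussian per dimension.
    return sum(gauss_logpdf(x, m) for x, m in zip(block, mus))

def spvae_logp(x, weights, experts):
    # Each mixture component splits x in half; the product node scores
    # the halves with its own pair of leaf experts (scopes are disjoint).
    half = len(x) // 2
    comp = []
    for w, (mus_left, mus_right) in zip(weights, experts):
        comp.append(math.log(w)
                    + leaf_logp(x[:half], mus_left)
                    + leaf_logp(x[half:], mus_right))
    # The sum node combines components via log-sum-exp for stability.
    m = max(comp)
    return m + math.log(sum(math.exp(c - m) for c in comp))
```

The point of the decomposition is that each leaf only ever sees a low-dimensional block, so each sub-problem is easier to learn than one monolithic high-dimensional density.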


5. Conclusion

We presented SPVAEs, a novel structured model which combines a tractable model (SPNs) and an intractable model (VAEs) in a natural way. As shown in our experiments, this leads to i) better density estimates, ii) smaller models, and iii) improved data efficiency when compared to classical VAEs. Future work includes more extensive experiments with SPN structures and VAE variants. Furthermore, the decompositional structure of SPVAEs naturally lends itself towards distributed model learning and inference. We also observed in our experiments that SPVAEs allowed larger learning rates than VAEs (up to 0.1) during training. Investigating this effect is also subject for future research. A current downside is that the decomposed nature of SPVAEs causes run-times approximately a factor of 5 slower than for VAEs. However, the execution time of SPVAEs could be reduced by more elaborate designs facilitating a higher degree of parallelism, for example using vectorization and distributed learning.






https://github.com/stelzner/supair   

Abstract

The recent Attend-Infer-Repeat (AIR) framework marks a milestone in structured probabilistic modeling, as it tackles the challenging problem of unsupervised scene understanding via Bayesian inference. AIR expresses the composition of visual scenes from individual objects, and uses variational autoencoders to model the appearance of those objects. However, inference in the overall model is highly intractable, which hampers its learning speed and makes it prone to suboptimal solutions. In this paper, we show that the speed and robustness of learning in AIR can be considerably improved by replacing the intractable object representations with tractable probabilistic models. In particular, we opt for sum-product networks (SPNs), expressive deep probabilistic models with a rich set of tractable inference routines. The resulting model, called SuPAIR, learns an order of magnitude faster than AIR, treats object occlusions in a consistent manner, and allows for the inclusion of a background noise model, improving the robustness of Bayesian scene understanding.





http://www.mlmi.eng.cam.ac.uk/foswiki/pub/Main/ClassOf2018/thesis_PingLiangTan.pdf


“An SPN is a probabilistic model designed for efficient inference.”
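As a small illustration of what "designed for efficient inference" means, the toy SPN below (our own example, not from the thesis) answers both a joint and a marginal query with a single bottom-up pass; marginalizing a variable simply replaces its leaves with 1, with no summation over states.

```python
# Toy SPN over two binary variables X1, X2. Leaves are Bernoulli
# distributions; product nodes factorize over disjoint scopes; the sum
# node is a weighted mixture (weights sum to 1, so it is normalized).
# This structure and all parameters are our own illustrative choices.

def bernoulli(p, x):
    # Leaf: P(X = x). x = None means "marginalized out", in which case
    # the leaf integrates to 1.
    if x is None:
        return 1.0
    return p if x == 1 else 1.0 - p

def spn(x1, x2):
    # Sum node over two product nodes, each factorizing {X1} x {X2}.
    left = bernoulli(0.8, x1) * bernoulli(0.6, x2)
    right = bernoulli(0.1, x1) * bernoulli(0.4, x2)
    return 0.3 * left + 0.7 * right

p_joint = spn(1, 0)       # joint query  P(X1=1, X2=0)
p_marginal = spn(1, None) # marginal query P(X1=1), same single pass
```

Both queries cost one evaluation of the network, linear in its size; this is exactly the tractability that sets SPNs apart from most deep generative models.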





http://www.di.uniba.it/~ndm/pubs/vergari18mlj.pdf

Visualizing and understanding Sum-Product Networks


Variational autoencoders (VAEs) [19] are generative autoencoders, but differently from MADEs they are tailored towards compressing and learning untangled representations of the data through a variational approach to Bayesian inference. While VAEs have recently gained momentum as generative models, their inference capabilities, contrary to SPNs, are limited and restricted to Monte Carlo estimates relying on the generated samples. W.r.t. all the above-mentioned neural models, one can learn one SPN structure from data and obtain a highly versatile probabilistic model capable of performing a wide variety of inference queries efficiently and at the same time providing very informative feature representations, as we will see in the following sections.
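To illustrate the contrast: a conditional query in an SPN needs only two exact bottom-up passes (joint divided by marginal), whereas a VAE would have to resort to Monte Carlo estimates. The toy network and parameters below are our own illustrative example, not taken from the paper.

```python
# Toy SPN over binary X1, X2: a sum node (weights 0.3 / 0.7) over two
# product nodes with Bernoulli leaves. Our own illustrative example.

def bern(p, x):
    # x = None marginalizes the leaf (it integrates to 1).
    return 1.0 if x is None else (p if x == 1 else 1.0 - p)

def toy_spn(x1, x2):
    return (0.3 * bern(0.9, x1) * bern(0.2, x2)
            + 0.7 * bern(0.3, x1) * bern(0.8, x2))

def conditional(x2_query, x1_evidence):
    # P(X2 | X1) = P(X1, X2) / P(X1): two exact evaluations, no sampling.
    return toy_spn(x1_evidence, x2_query) / toy_spn(x1_evidence, None)
```

Both evaluations are exact and linear in network size, which is what makes SPNs usable for the wide variety of inference queries the paper mentions.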





awesome-spn (a curated list of SPN papers and code)

https://github.com/arranger1044/awesome-spn#structure-learning

A built-in graph network?




