Initial work on variational autoencoders assumed independent latent variables with simple distributions. Subsequent work has explored incorporating more complex distributions and dependency structures: incorporating normalizing flows into the encoder network allows the latent variables to be entangled non-linearly, yielding a richer class of approximate posteriors, while stacking layers of latent variables allows more complex priors to be specified for the generative model. This work explores incorporating arbitrary dependency structures, as specified by Bayesian networks, into VAEs. This is achieved by extending both the prior and inference network with graphical residual flows: residual flows that encode conditional independence by masking the weight matrices of the flow's residual blocks. We compare our model's performance on several synthetic datasets and show its potential in data-sparse settings.
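To make the masking idea concrete, the following is a minimal illustrative sketch (in PyTorch; all class and parameter names are hypothetical, not the authors' implementation) of how fixed binary masks over a residual block's weight matrices can restrict each output dimension to its own variable and that variable's parents in a Bayesian network. For brevity it omits the Lipschitz constraint (e.g., spectral normalization) that an invertible residual flow would additionally require.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are elementwise-multiplied by a fixed binary mask."""
    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        self.register_buffer("mask", mask)  # shape: (out_features, in_features)

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)

class GraphicalResidualBlock(nn.Module):
    """One residual step z = x + g(x). The two masked layers of g give each
    variable its own group of hidden units: the first layer lets the hidden
    units of variable i see x_i and the parents of i in the Bayesian network,
    and the second layer maps those hidden units back to output i only, so the
    Jacobian of g has the sparsity pattern of the graph. (Illustrative only;
    the Lipschitz constraint needed for invertibility is omitted.)"""
    def __init__(self, adjacency, hidden_per_var=8):
        super().__init__()
        d = adjacency.shape[0]
        h = hidden_per_var
        # conn[i, j] = 1 iff j is variable i itself or a parent of i in the BN
        conn = (adjacency.T + torch.eye(d)).clamp(max=1.0)      # (d, d)
        # hidden units of variable i may only read x_i and its parents
        in_mask = conn.repeat_interleave(h, dim=0)               # (d*h, d)
        # output i may only read the hidden units assigned to variable i
        out_mask = torch.eye(d).repeat_interleave(h, dim=1)      # (d, d*h)
        self.g = nn.Sequential(
            MaskedLinear(d, d * h, in_mask),
            nn.Tanh(),
            MaskedLinear(d * h, d, out_mask),
        )

    def forward(self, x):
        return x + self.g(x)

# Toy chain x1 -> x2 -> x3: adjacency[i, j] = 1 encodes an edge i -> j.
adj = torch.tensor([[0., 1., 0.],
                    [0., 0., 1.],
                    [0., 0., 0.]])
block = GraphicalResidualBlock(adj)
print(block(torch.randn(4, 3)).shape)  # torch.Size([4, 3])
```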