Generative models (GMs) such as the Generative Adversarial Network (GAN) and the Variational Auto-Encoder (VAE) have thrived in recent years and achieve high-quality results in generating new samples. In Computer Vision especially, GMs have been used for image inpainting, denoising, and completion, which can be treated as inference from observed pixels to corrupted pixels. However, images are hierarchically structured, which sets them apart from many real-world inference scenarios with non-hierarchical features. These scenarios involve heterogeneous stochastic variables and irregular mutual dependencies. Traditionally they are modeled by Bayesian Networks (BNs). However, both learning and inference in a BN are NP-hard, so the number of stochastic variables a BN can handle is severely constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR model with adversarial loss (EARA), and give theoretical results on their effectiveness. Experiments on several BN datasets show that the proposed EAR model achieves the best performance in most cases compared to other GMs. Beyond this black-box analysis, we also conduct a series of experiments on Markov boundary inference with GMs for white-box analysis and give theoretical results.
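As background for the extended autoregressive (EAR) model, a standard autoregressive model factorizes a joint distribution over variables $x_1, \dots, x_n$ by the chain rule; this is the textbook factorization, not the paper's specific EAR formulation:

```latex
p(x_1, \dots, x_n) = \prod_{i=1}^{n} p\left(x_i \mid x_1, \dots, x_{i-1}\right)
```

A Bayesian Network expresses the same joint distribution with each conditional restricted to a variable's parents in the DAG, $p(x_i \mid \mathrm{pa}(x_i))$, which is what makes autoregressive GMs a natural candidate for approximating BN-style inference over heterogeneous variables.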