Collecting paired training data is difficult in practice, but unpaired samples are widely available. Current approaches generate synthetic training data from unpaired samples by exploring the relationship between the corrupted and clean data. This work proposes LUD-VAE, a deep generative method that learns a joint probability density function from data sampled from the marginal distributions. Our approach is based on a carefully designed probabilistic graphical model in which the clean and corrupted data domains are conditionally independent. Using variational inference, we maximize the evidence lower bound (ELBO) to estimate the joint probability density function. Furthermore, we show that the ELBO is computable without paired samples under an inference-invariance assumption, which provides the mathematical rationale for our approach in the unpaired setting. Finally, we apply our method to real-world image denoising and super-resolution tasks, training downstream models on the synthetic data generated by LUD-VAE. Experimental results validate the advantages of our method over other learnable approaches.
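The core objective described above, an ELBO for a graphical model in which clean data x and corrupted data y are conditionally independent given a latent z, so that p(x, y, z) = p(z) p(x|z) p(y|z), can be illustrated with a minimal numerical sketch. This is a hypothetical toy example with linear decoders and a placeholder Gaussian posterior, not the paper's actual network architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def gaussian_loglik(x, mean, sigma=1.0):
    # log N(x; mean, sigma^2 I)
    d = x.size
    return -0.5 * (np.sum((x - mean) ** 2) / sigma**2
                   + d * np.log(2 * np.pi * sigma**2))

def elbo(x, y, encode, decode_x, decode_y, n_samples=32):
    # Monte Carlo ELBO for p(x, y, z) = p(z) p(x|z) p(y|z):
    #   E_{q(z)}[ log p(x|z) + log p(y|z) ] - KL( q(z) || p(z) )
    # The two reconstruction terms decompose because x and y are
    # conditionally independent given z.
    mu, logvar = encode(x, y)
    std = np.exp(0.5 * logvar)
    recon = 0.0
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)  # reparameterization trick
        recon += gaussian_loglik(x, decode_x(z)) + gaussian_loglik(y, decode_y(z))
    return recon / n_samples - gaussian_kl(mu, logvar)

# Toy setup (all names here are illustrative): z in R^2, with x and y
# noisy linear views of z through matrices A and B.
A = rng.standard_normal((4, 2))
B = rng.standard_normal((4, 2))
encode = lambda x, y: (np.zeros(2), np.zeros(2))  # placeholder posterior N(0, I)
z_true = rng.standard_normal(2)
x_obs = A @ z_true + 0.1 * rng.standard_normal(4)
y_obs = B @ z_true + 0.1 * rng.standard_normal(4)

val = elbo(x_obs, y_obs, encode, lambda z: A @ z, lambda z: B @ z)
```

In the paper's unpaired setting, the inference-invariance assumption is what lets the posterior be estimated from x or y alone rather than from paired (x, y) as in this sketch.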