Collecting paired training data is difficult in practice, whereas unpaired samples are widely available. Current approaches generate synthetic training data from unpaired samples by exploiting the relationship between the corrupted and clean data. This work proposes LUD-VAE, a deep generative method that learns the joint probability density function from data sampled only from the marginal distributions. Our approach is based on a carefully designed probabilistic graphical model in which the clean and corrupted data domains are conditionally independent. Using variational inference, we maximize the evidence lower bound (ELBO) to estimate the joint probability density function. Furthermore, we show that the ELBO is computable without paired samples under an inference-invariant assumption; this property provides the mathematical rationale for our approach in the unpaired setting. Finally, we apply our method to real-world image denoising, super-resolution, and low-light image enhancement tasks, training the models on the synthetic data generated by LUD-VAE. Experimental results demonstrate the advantages of our method over competing approaches.
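To make the variational-inference step concrete, the sketch below estimates the ELBO for a standard Gaussian VAE (not the full LUD-VAE model): a Monte-Carlo reconstruction term via the reparameterization trick plus a closed-form KL term against a standard normal prior. The toy linear decoder, shapes, and values are illustrative assumptions.

```python
import numpy as np

# ELBO = E_{q(z|x)}[log p(x|z)] - KL(q(z|x) || p(z)),
# with q(z|x) = N(mu, diag(sigma^2)) and prior p(z) = N(0, I).

rng = np.random.default_rng(0)

def elbo(x, mu, log_var, decode, n_samples=64):
    """Monte-Carlo estimate of the evidence lower bound."""
    sigma = np.exp(0.5 * log_var)
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal((n_samples, mu.size))
    z = mu + sigma * eps
    # Gaussian reconstruction log-likelihood (unit variance, constants dropped).
    x_hat = decode(z)                                   # (n_samples, x.size)
    recon = -0.5 * np.sum((x - x_hat) ** 2, axis=1).mean()
    # KL(N(mu, sigma^2) || N(0, I)) in closed form.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon - kl

# Toy linear decoder standing in for the decoder network (an assumption).
W = rng.standard_normal((2, 4)) * 0.1
decode = lambda z: z @ W

x = np.array([0.5, -0.2, 0.1, 0.3])
mu = np.zeros(2)
log_var = np.zeros(2)
print(elbo(x, mu, log_var, decode))
```

In practice the decoder is a neural network and `mu`, `log_var` come from an encoder; maximizing this quantity by gradient ascent is the training objective the abstract refers to, with LUD-VAE's contribution being that the bound remains computable from unpaired marginal samples.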