Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution given only samples from each distribution. This task has many applications, including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective and are limited in efficiently aligning multiple distributions. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min (i.e., cooperative) optimization problem, whose objective also provides a natural evaluation metric for unsupervised distribution alignment. We show empirical results on both simulated and real-world datasets to demonstrate the benefits of our approach. Code is available at https://github.com/inouye-lab/alignment-upper-bound.
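To make the cooperative objective concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation from the repository above): it assumes PyTorch, toy Gaussian source samples, simple per-dimension affine maps as the invertible transformations, and a learnable diagonal Gaussian as the shared density. Jointly minimizing the averaged flow negative log-likelihoods over both the maps and the shared density is, up to constants, minimizing an upper bound on the JSD between the aligned distributions, so the optimization is min-min rather than min-max.

```python
# A minimal sketch (assumptions noted above, not the paper's implementation):
# each source distribution k gets an invertible map T_k, and all pushed-forward
# samples are fit to one shared density q. The averaged flow NLL upper-bounds
# the JSD between the aligned distributions up to constants, so the flows and
# q cooperatively descend the same loss (min-min, no adversary).
import torch

torch.manual_seed(0)
dim, n_dists = 2, 3

# Toy source samples: three Gaussians with different scales and means.
sources = [torch.randn(512, dim) * (k + 1) + 2 * k for k in range(n_dists)]

# Invertible per-dimension affine maps T_k(x) = exp(log_a_k) * x + b_k.
log_a = torch.nn.Parameter(torch.zeros(n_dists, dim))
b = torch.nn.Parameter(torch.zeros(n_dists, dim))

# Shared density q: a learnable diagonal Gaussian (a stand-in for a
# richer density model).
mu = torch.nn.Parameter(torch.zeros(dim))
log_sigma = torch.nn.Parameter(torch.zeros(dim))

opt = torch.optim.Adam([log_a, b, mu, log_sigma], lr=1e-2)

for step in range(2000):
    loss = 0.0
    for k, x in enumerate(sources):
        z = x * log_a[k].exp() + b[k]      # T_k(x)
        log_det = log_a[k].sum()           # log |det Jacobian of T_k|
        q = torch.distributions.Normal(mu, log_sigma.exp())
        # Flow NLL of source k under the shared density q.
        nll = -(q.log_prob(z).sum(-1) + log_det).mean()
        loss = loss + nll / n_dists
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, each T_k pushes its source toward q; the final loss value
# itself can serve as the kind of natural evaluation metric the abstract
# describes, since it bounds the JSD of the aligned distributions.
```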