Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution given only samples from each distribution. This task has many applications, including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective and are limited in efficiently aligning multiple distributions. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon divergence (JSD). Importantly, our problem reduces to a min-min (i.e., cooperative) problem, and it provides a natural evaluation metric for unsupervised distribution alignment. We present empirical results on both simulated and real-world datasets to demonstrate the benefits of our approach.
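To make the min-min structure concrete, here is one standard way such a JSD upper bound arises (the notation below is ours; the abstract itself does not state the formula). For source distributions P_1, ..., P_K, invertible maps T_k, and pushforwards T_{k#}P_k, the generalized JSD with uniform weights satisfies, for any density model Q,

\[
\mathrm{JSD}(T_{1\sharp}P_1,\dots,T_{K\sharp}P_K)
= \min_{Q} \frac{1}{K}\sum_{k=1}^{K} \mathrm{KL}\big(T_{k\sharp}P_k \,\|\, Q\big)
\le \frac{1}{K}\sum_{k=1}^{K} \mathrm{KL}\big(T_{k\sharp}P_k \,\|\, Q\big),
\]

with equality when Q is the mixture \(\frac{1}{K}\sum_k T_{k\sharp}P_k\). Each KL term expands via the change-of-variables formula as

\[
\mathrm{KL}\big(T_{k\sharp}P_k \,\|\, Q\big)
= -\,\mathbb{E}_{x\sim P_k}\big[\log q(T_k(x)) + \log\lvert\det \nabla T_k(x)\rvert\big] - H(P_k),
\]

and since the entropies H(P_k) are constants of the data, jointly minimizing the bound over the maps (T_1, ..., T_K) and the shared model Q is a cooperative min-min problem: maximize the flow log-likelihood of every domain's samples under a single shared Q. The achieved bound value then doubles as a quantitative alignment score.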
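A minimal illustrative sketch of this cooperative objective (not the authors' code): two one-dimensional distributions are aligned with per-domain affine flows T_k, holding the shared model Q fixed at a standard normal for brevity, whereas in the full framework Q is also optimized (the second "min"). The data, flow family, and hyperparameters here are assumptions for illustration only.

import torch

torch.manual_seed(0)
# Samples from two source distributions (synthetic Gaussians for brevity).
x1 = torch.randn(1024, 1) * 2.0 + 3.0   # P_1 = N(3, 2^2)
x2 = torch.randn(1024, 1) * 0.5 - 1.0   # P_2 = N(-1, 0.5^2)

# Per-domain invertible maps T_k(x) = exp(log_s_k) * x + b_k.
params = [torch.zeros(2, requires_grad=True) for _ in range(2)]  # [log_s, b]
opt = torch.optim.Adam(params, lr=0.05)
normal = torch.distributions.Normal(0.0, 1.0)  # shared model Q (held fixed here)

def neg_log_lik(x, p):
    log_s, b = p[0], p[1]
    z = torch.exp(log_s) * x + b                 # z = T_k(x)
    # -E[log q(T_k(x)) + log|det dT_k/dx|], i.e., KL(T_k#P_k || Q) up to H(P_k)
    return -(normal.log_prob(z) + log_s).mean()

for step in range(500):
    opt.zero_grad()
    loss = neg_log_lik(x1, params[0]) + neg_log_lik(x2, params[1])
    loss.backward()
    opt.step()

# After training, T_1(x1) and T_2(x2) are both approximately N(0, 1), so the
# two distributions coincide in the shared space; the residual loss (up to
# entropy constants) plays the role of the natural evaluation metric the
# abstract mentions.
print(loss.item())

Because both flows descend the same loss, the optimization is cooperative (min-min) rather than adversarial: there is no inner maximization, and the training loss itself is a meaningful, monotone measure of alignment quality.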