The Gromov-Wasserstein (GW) distance is a powerful tool for comparing and aligning probability distributions supported on different metric spaces. It has become the main modeling technique for aligning heterogeneous data in a wide range of graph learning tasks. However, the GW distance is known to be highly sensitive to outliers, which can result in large inaccuracies when outliers are given the same weight as other samples in the objective function. To mitigate this issue, we introduce a new and robust version of the GW distance called RGW. RGW features optimistically perturbed marginal constraints within a $\varphi$-divergence-based ambiguity set. To make the benefits of RGW more accessible in practice, we develop a computationally efficient algorithm, Bregman proximal alternating linearization minimization, with a theoretical convergence guarantee. Through extensive experimentation, we validate our theoretical results and demonstrate the effectiveness of RGW on real-world graph learning tasks, such as subgraph matching and partial shape correspondence.
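To make the quantity being robustified concrete, the following is a minimal NumPy sketch of the standard (non-robust) entropic GW solver with the squared loss, in the mirror-descent style of Peyré et al. (2016): at each outer step, the gradient of the GW objective is used as a cost matrix for a Sinkhorn projection onto the couplings with fixed marginals `p` and `q`. All names here are illustrative; this is not the RGW algorithm from the paper, which additionally perturbs the marginals within a $\varphi$-divergence ball.

```python
import numpy as np

def entropic_gw(C1, C2, p, q, eps=0.5, outer_iter=30, sinkhorn_iter=200):
    """Entropic Gromov-Wasserstein with squared loss (illustrative sketch).

    C1, C2: pairwise distance matrices of the two metric spaces.
    p, q:   marginal probability vectors.
    Returns the coupling T and the (unregularized) GW objective value.
    """
    T = np.outer(p, q)  # feasible initialization
    # Constant part of the gradient for the squared loss:
    #   cst_ij = sum_k C1_ik^2 p_k + sum_l C2_jl^2 q_l
    cst = ((C1 ** 2) @ p)[:, None] + ((C2 ** 2) @ q)[None, :]
    for _ in range(outer_iter):
        grad = cst - 2.0 * C1 @ T @ C2.T        # gradient of the GW objective at T
        K = np.exp(-(grad - grad.min()) / eps)  # stabilized Gibbs kernel
        u = np.ones_like(p)
        for _ in range(sinkhorn_iter):          # Sinkhorn projection onto the
            u = p / (K @ (q / (K.T @ u)))       # couplings with marginals (p, q)
        v = q / (K.T @ u)
        T = u[:, None] * K * v[None, :]
    grad = cst - 2.0 * C1 @ T @ C2.T
    return T, float(np.sum(grad * T))

# Usage: two small point clouds of different sizes.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 2))
y = rng.standard_normal((5, 2))
C1 = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
C2 = np.linalg.norm(y[:, None] - y[None, :], axis=-1)
p = np.full(6, 1 / 6)
q = np.full(5, 1 / 5)
T, cost = entropic_gw(C1, C2, p, q)
```

Because the marginal constraints are enforced exactly, a single outlier point in `x` or `y` must still receive its full share of mass, which is precisely the sensitivity RGW's relaxed (optimistically perturbed) marginals are designed to avoid.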