We consider the problem of inferring a graph structure from a given set of smooth graph signals. The number of observed graph signals is always finite and the signals are possibly noisy, so the statistical properties of the data distribution are ambiguous. Traditional graph learning models do not take this distributional uncertainty into account, and their performance may therefore be sensitive to the particular set of data. In this paper, we propose a distributionally robust approach to graph learning, which incorporates first- and second-moment uncertainty into the smooth graph learning model. Specifically, we cast our graph learning model as a minimax optimization problem, and further reformulate it as a nonconvex minimization problem with linear constraints. In our proposed formulation, we find a theoretical interpretation of the Laplacian regularizer, which is adopted in many existing works in an intuitive manner. Although the first-moment uncertainty introduces an inconvenient square-root term in the objective function, we prove that the objective enjoys the smoothness property with probability 1 over the entire constraint set. We develop an efficient projected gradient descent (PGD) method and establish its global iterate convergence to a critical point. We conduct extensive experiments on both synthetic and real data to verify the effectiveness of our model and the efficiency of the PGD algorithm. Compared with state-of-the-art smooth graph learning methods, our approach exhibits superior and more robust performance across different populations of signals in terms of various evaluation metrics.
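The abstract's PGD method alternates a gradient step on the (nonconvex but smooth) objective with a Euclidean projection onto the linear constraint set. The paper's actual objective and constraints are not given here, so the following is only an illustrative sketch of the generic PGD template on a hypothetical toy problem (minimizing a quadratic subject to nonnegativity, where the projection is simple clipping); the function names and the toy instance are assumptions, not the paper's formulation.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step=0.1, iters=500):
    """Generic PGD template: take a gradient step, then project the
    iterate back onto the feasible set. The graph-learning objective and
    Laplacian constraints of the paper would plug in as `grad`/`project`."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Hypothetical toy instance: minimize ||x - c||^2 subject to x >= 0.
# Here the Euclidean projection onto the nonnegative orthant is clipping.
c = np.array([1.0, -2.0, 3.0])
grad = lambda x: 2.0 * (x - c)          # gradient of the smooth objective
project = lambda x: np.maximum(x, 0.0)  # projection onto {x : x >= 0}
x_star = projected_gradient_descent(grad, project, np.zeros(3))
# Converges to the clipped target [1, 0, 3].
```

With a step size below the inverse smoothness constant, each iterate stays feasible and the objective decreases monotonically, which is the mechanism behind the global iterate convergence claimed in the abstract.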