Stochastic graph neural networks (SGNNs) are information processing architectures that learn representations from data over random graphs. SGNNs are trained with respect to the expected performance, which provides no guarantee on how individual output realizations deviate from the optimal expectation. To overcome this issue, we propose a variance-constrained optimization problem for SGNNs that balances the expected performance against the stochastic deviation. The problem is solved with an alternating primal-dual learning procedure that updates the SGNN parameters with gradient descent and the dual variable with gradient ascent. To characterize the explicit effect of variance-constrained learning, we conduct a theoretical analysis of the variance of the SGNN output and identify a trade-off between stochastic robustness and discrimination power. We further analyze the duality gap of the variance-constrained optimization problem and the convergence behavior of the primal-dual learning procedure. The former indicates the optimality loss induced by the dual transformation and the latter characterizes the limiting error of the iterative algorithm, both of which guarantee the performance of variance-constrained learning. Through numerical simulations, we corroborate our theoretical findings and observe strong expected performance with a controllable standard deviation.
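A minimal sketch of the alternating primal-dual update described above, assuming a generic SGNN model, a random-graph sampler, and a task loss; the names (sgnn, sample_graph, loss_fn, var_budget, the step sizes, and the number of Monte Carlo samples) are illustrative assumptions rather than the paper's exact formulation. The Lagrangian combines the expected loss with a dual-weighted variance constraint; the primal step descends on the SGNN parameters and the dual step ascends on the multiplier, projected onto the nonnegative orthant.

```python
import torch

def primal_dual_step(sgnn, params, mu, x, y, sample_graph, loss_fn,
                     var_budget=0.1, lr_theta=1e-3, lr_mu=1e-2, n_samples=8):
    # Monte Carlo estimate of the loss over random graph realizations.
    losses = []
    for _ in range(n_samples):
        S = sample_graph()                    # one realization of the random graph
        losses.append(loss_fn(sgnn(x, S), y))
    losses = torch.stack(losses)

    mean_loss = losses.mean()                 # expected-performance term
    var_loss = losses.var(unbiased=False)     # stochastic-deviation term

    # Lagrangian: expected loss plus dual-weighted variance constraint.
    lagrangian = mean_loss + mu * (var_loss - var_budget)

    # Primal step: gradient descent on the SGNN parameters.
    grads = torch.autograd.grad(lagrangian, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr_theta * g

    # Dual step: gradient ascent on the multiplier, projected to stay nonnegative.
    with torch.no_grad():
        mu += lr_mu * (var_loss - var_budget)
        mu.clamp_(min=0.0)

    return mean_loss.item(), var_loss.item()
```

In this sketch the variance is taken over the sampled loss values; the constraint level var_budget controls the trade-off between expected performance and stochastic deviation, mirroring the robustness-discriminability trade-off analyzed in the paper.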