Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer. However, unfolding a proximal splitting algorithm with a positive semi-definite (PSD) cone projection operator per iteration is expensive, due to the required full matrix eigen-decomposition. In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier, where the PSD cone constraint is replaced by a set of "tightest possible" linear constraints per iteration. As a result, each iteration only requires solving a linear program (LP) and computing one extreme eigenvector. Inside the unrolled network, we optimize, via stochastic gradient descent (SGD), parameters that determine graph edge weights in two ways: i) a metric matrix that computes feature distances, and ii) a sparse weight matrix computed via local linear embedding (LLE). Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved performance comparable to pure data-driven networks while using far fewer parameters.
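The claim that each iteration needs only one extreme eigenvector (rather than a full eigen-decomposition) can be illustrated with plain power iteration; the sketch below is illustrative only and is not the paper's exact GDPA procedure (the function name and the example matrix are our own):

```python
import numpy as np

def extreme_eigenvector(M, iters=500, tol=1e-10):
    """Approximate the eigenvector of the largest-magnitude eigenvalue
    of a symmetric matrix M by power iteration.

    Assumes the dominant eigenvalue is positive and simple; otherwise
    the iterates may oscillate in sign or fail to converge.
    """
    n = M.shape[0]
    v = np.ones(n) / np.sqrt(n)          # fixed, normalized starting vector
    for _ in range(iters):
        w = M @ v                         # one matrix-vector product per step
        w_norm = np.linalg.norm(w)
        if w_norm == 0.0:
            break                         # v lies in the null space of M
        w /= w_norm
        if np.linalg.norm(w - v) < tol:   # converged to a fixed direction
            v = w
            break
        v = w
    lam = v @ M @ v                       # Rayleigh quotient: eigenvalue estimate
    return lam, v

# Example: a symmetric 2x2 matrix with eigenvalues 3 and 1;
# the dominant eigenvector is (1, 1) / sqrt(2).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = extreme_eigenvector(M)
```

Each step costs one matrix-vector product, which is the reason an extreme-eigenvector computation is far cheaper than the full eigen-decomposition required by a PSD cone projection.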