Graph Representation Learning (GRL) has been advancing at an unprecedented rate. However, many results rely on careful design and tuning of architectures, objectives, and training schemes. We propose efficient GRL methods that optimize convexified objectives with known closed-form solutions. Guaranteed convergence to a global optimum frees practitioners from hyper-parameter and architecture tuning. Nevertheless, our proposed methods achieve competitive or state-of-the-art performance on popular GRL tasks while providing orders-of-magnitude speedups. Although the design matrix ($\mathbf{M}$) of our objective is expensive to compute, we exploit results from random matrix theory to approximate solutions in linear time while avoiding an explicit calculation of $\mathbf{M}$. Our code is available online: http://github.com/samihaija/tf-fsvd
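To make the last point concrete, the sketch below illustrates (in generic SciPy, not the authors' TensorFlow implementation) how a closed-form solution based on a truncated SVD can be computed without ever materializing the design matrix. The particular choice $\mathbf{M} = \mathbf{A}\mathbf{A}^\top + \mathbf{A}$ is a hypothetical stand-in for the paper's objective: $\mathbf{M}$ is exposed only through matrix-vector products against the sparse adjacency matrix, so each product costs time linear in the number of edges.

```python
# Minimal sketch: truncated SVD of an implicitly defined design matrix.
# M = A @ A.T + A is a hypothetical example, not the paper's exact objective.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, svds


def implicit_svd(A: sp.csr_matrix, rank: int = 32):
    """Top-`rank` SVD of M = A @ A.T + A without forming M explicitly."""
    n = A.shape[0]

    def matvec(x):
        # M @ x computed as two sparse products: A @ (A.T @ x) + A @ x.
        return A @ (A.T @ x) + A @ x

    def rmatvec(x):
        # M.T @ x = (A @ A.T + A).T @ x = A @ (A.T @ x) + A.T @ x.
        return A @ (A.T @ x) + A.T @ x

    M_op = LinearOperator((n, n), matvec=matvec, rmatvec=rmatvec,
                          dtype=np.float64)
    U, s, Vt = svds(M_op, k=rank)  # never touches a dense n-by-n matrix
    return U, s, Vt


# Usage on a small random sparse graph.
A = sp.random(1000, 1000, density=0.01, format="csr", dtype=np.float64)
U, s, Vt = implicit_svd(A, rank=16)
embeddings = U * np.sqrt(s)  # rank-16 node embeddings from the SVD factors
```

The key design choice is that the SVD routine only ever asks the operator for products $\mathbf{M}\mathbf{x}$ and $\mathbf{M}^\top\mathbf{x}$, which is what allows the overall cost to stay linear in the graph size even when $\mathbf{M}$ itself would be dense and quadratic to store.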