In classical settings such as Euclidean space, positive definite kernels and their analytic properties are explicitly available and can be incorporated directly into kernel-based learning algorithms. The situation is different when the underlying domain is a discrete, irregular graph. In this case, the respective kernels have to be computed in a preliminary step before they can be applied inside a kernel machine. Typically, such a kernel is given as a matrix function of the graph Laplacian, and its direct calculation incurs a high computational burden when the graph is very large. In this work, we investigate five different block Krylov subspace methods that yield cheaper iterative approximations of these kernels. We analyze the convergence properties of these Krylov subspace methods and study to what extent they preserve the symmetry and positive definiteness of the original kernels they approximate. We further discuss the computational complexity and memory requirements of these methods, as well as possible implications for the resulting kernel predictors in machine learning.
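To make the setting concrete, the sketch below shows one representative block Krylov method, a plain block Lanczos iteration, used to approximate the action of a graph heat kernel exp(-tL) on a block of vectors. This is only an illustration under simplifying assumptions (no reorthogonalization or breakdown handling), not the implementation studied here; the function name block_lanczos_fL_B and all parameter choices are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def block_lanczos_fL_B(L, B, k, f):
    """Approximate f(L) @ B with k steps of block Lanczos (L symmetric).

    Builds an orthonormal basis Q_k of the block Krylov space
    span{B, L B, ..., L^{k-1} B} and the projected block tridiagonal
    T_k = Q_k^T L Q_k, then uses f(L) B ~ Q_k f(T_k) E_1 R0, where B = Q_1 R0.
    """
    n, s = B.shape
    Q1, R0 = np.linalg.qr(B)                   # B = Q1 R0
    Qs = [Q1]
    A_blocks, B_blocks = [], []
    Q_prev = np.zeros((n, s))
    B_prev = np.zeros((s, s))
    for _ in range(k):
        W = L @ Qs[-1] - Q_prev @ B_prev.T     # three-term block recurrence
        A_j = Qs[-1].T @ W
        W = W - Qs[-1] @ A_j
        Q_next, B_j = np.linalg.qr(W)          # sketch: no breakdown handling
        A_blocks.append(A_j)
        B_blocks.append(B_j)
        Q_prev, B_prev = Qs[-1], B_j
        Qs.append(Q_next)
    # assemble the (k s) x (k s) block tridiagonal projection T_k
    T = np.zeros((k * s, k * s))
    for j in range(k):
        T[j*s:(j+1)*s, j*s:(j+1)*s] = A_blocks[j]
        if j < k - 1:
            T[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = B_blocks[j]
            T[j*s:(j+1)*s, (j+1)*s:(j+2)*s] = B_blocks[j].T
    Qk = np.hstack(Qs[:k])
    return Qk @ f(T)[:, :s] @ R0               # f evaluated on a small matrix only

# toy example: heat kernel exp(-t L) on a path graph with 40 nodes
A = np.diag(np.ones(39), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian L = D - A

rng = np.random.default_rng(0)
B = rng.standard_normal((40, 3))               # block of s = 3 vectors
t = 0.5
approx = block_lanczos_fL_B(L, B, k=6, f=lambda T: expm(-t * T))
exact = expm(-t * L) @ B                       # dense reference, O(n^3)
print(np.linalg.norm(approx - exact))          # error decays rapidly with k
```

The point of the projection step is that f is evaluated only on the small ks-by-ks matrix T_k rather than on the full n-by-n Laplacian, which is where the savings for large graphs come from; a practical implementation would add reorthogonalization and handling of rank-deficient blocks (breakdown).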