The kernel herding algorithm is used to construct quadrature rules in a reproducing kernel Hilbert space (RKHS). While the computational efficiency of the algorithm and the stability of the resulting quadrature formulas are advantages of this method, the convergence speed of the integration error for a given number of nodes is slow compared with that of other quadrature methods. In this paper, we propose a modified kernel herding algorithm, whose framework was introduced in a previous study, with the aim of obtaining sparser solutions while preserving the advantages of standard kernel herding. In the proposed algorithm, the negative gradient is approximated by several vertex directions, and the current solution is updated by moving in the approximate descent direction at each iteration. We show that the convergence speed of the integration error is directly determined by the cosine of the angle between the negative gradient and the approximate gradient. Based on this, we propose new gradient approximation algorithms and analyze them theoretically, including a convergence analysis. In numerical experiments, we confirm the effectiveness of the proposed algorithms in terms of the sparsity of the nodes and computational efficiency. Moreover, we provide a new theoretical analysis of kernel quadrature rules with fully corrective weights, which yields faster convergence rates than those established in previous studies.
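For readers unfamiliar with the baseline procedure being modified, kernel herding can be viewed as a Frank-Wolfe-type method on the squared maximum mean discrepancy (MMD) between the target distribution and the current quadrature rule: at each iteration a vertex direction (a new node) is selected according to the negative gradient, and the weights are shrunk toward it. The following is a minimal NumPy sketch of this standard baseline only; the Gaussian kernel, the candidate grid, the Monte Carlo estimate of the kernel mean embedding, and the function names (gaussian_kernel, herd) are illustrative assumptions, and the sketch does not implement the paper's proposed gradient-approximation or fully corrective variants.

```python
# A minimal sketch of standard kernel herding seen as Frank-Wolfe on the MMD
# objective. This is the baseline the paper modifies, NOT the proposed
# algorithm; kernel, candidate grid, and embedding estimate are assumptions.
import numpy as np

def gaussian_kernel(x, y, bandwidth=0.2):
    """Gaussian (RBF) kernel k(x, y), broadcasting over arrays."""
    return np.exp(-(x - y) ** 2 / (2.0 * bandwidth ** 2))

def herd(n_nodes, candidates, target_samples):
    """Greedy kernel herding: each step picks the candidate node that best
    aligns with the negative gradient of the squared MMD (the Frank-Wolfe
    vertex direction), then shrinks the current weights toward it."""
    # Monte Carlo estimate of the kernel mean embedding mu(x) = E_Y[k(x, Y)].
    mu_hat = gaussian_kernel(candidates[:, None], target_samples[None, :]).mean(axis=1)
    nodes, weights = [], []
    for t in range(n_nodes):
        if nodes:
            # Inner product of the current rule with k(x, .) for every candidate x.
            cross = gaussian_kernel(candidates[:, None], np.array(nodes)[None, :])
            score = mu_hat - cross @ np.array(weights)
        else:
            score = mu_hat
        idx = int(np.argmax(score))      # vertex maximizing the descent direction
        step = 1.0 / (t + 1)             # classical herding step size
        weights = [w * (1.0 - step) for w in weights] + [step]
        nodes.append(candidates[idx])
    return np.array(nodes), np.array(weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.uniform(0.0, 1.0, size=5000)   # target distribution: Uniform[0, 1]
    grid = np.linspace(0.0, 1.0, 501)
    nodes, weights = herd(20, grid, target)
    # Quadrature estimate of E[f(X)] for f(x) = x**2 (true value 1/3).
    print(np.sum(weights * nodes ** 2))
```

The paper's modification replaces the single vertex direction above with an approximate descent direction built from several vertex directions, and its analysis ties the resulting convergence rate to the cosine of the angle between this approximation and the true negative gradient.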