This paper proposes the recursive and square-root BLS algorithms to improve the original broad learning system (BLS) for newly added inputs; they use the inverse and the inverse Cholesky factor, respectively, of the Hermitian matrix in the ridge inverse to update the ridge solution. The recursive BLS updates the inverse by the matrix inversion lemma, while the square-root BLS updates the upper-triangular inverse Cholesky factor by multiplying it with an upper-triangular intermediate matrix. When the number p of added training samples exceeds the total number k of nodes in the network, i.e., p > k, the inverse of a sum of matrices is applied so that only a smaller matrix inversion or inverse Cholesky factorization is required. The original BLS, based on the generalized inverse with ridge regression, assumes the ridge parameter λ → 0 in the ridge inverse. When λ → 0 is not satisfied, numerical experiments on the MNIST and NORB datasets show that both proposed ridge solutions improve the testing accuracy of the original BLS, and the improvement becomes more significant as λ increases. Moreover, both proposed BLS algorithms theoretically require lower computational complexity than the original BLS and are significantly faster in simulations on the MNIST dataset: the speedups in total training time of the recursive and square-root BLS algorithms over the original BLS are 4.41 and 6.92, respectively, when p > k, and 2.80 and 1.59, respectively, when p < k.
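To make the recursive update concrete, the following is a minimal numpy sketch of updating a ridge solution for newly added samples via the matrix inversion lemma (the Woodbury identity), in the spirit of the recursive BLS described above. All names (`A`, `Ax`, `Q`, `W`) and the sizes are illustrative assumptions, not the paper's notation; the sketch covers the basic case (here p < k) and omits the p > k variant and the square-root (inverse Cholesky) variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, m = 50, 8, 10, 3   # old samples, added samples, total nodes, outputs
lam = 0.1                    # ridge parameter lambda (not assumed -> 0)

A  = rng.standard_normal((n, k))   # existing node-output matrix
Y  = rng.standard_normal((n, m))   # existing targets
Ax = rng.standard_normal((p, k))   # node outputs of the p newly added samples
Yx = rng.standard_normal((p, m))   # their targets

# Initial ridge solution W and the inverse Q = (A^T A + lam*I)^(-1)
Q = np.linalg.inv(A.T @ A + lam * np.eye(k))
W = Q @ (A.T @ Y)

# Recursive update by the matrix inversion lemma:
# Q_new = Q - Q Ax^T (I_p + Ax Q Ax^T)^(-1) Ax Q
T = Q @ Ax.T
Q_new = Q - T @ np.linalg.solve(np.eye(p) + Ax @ T, T.T)
# Updated ridge solution without recomputing from scratch
W_new = W + Q_new @ Ax.T @ (Yx - Ax @ W)

# Sanity check against the batch ridge solution on all n + p samples
A_all = np.vstack([A, Ax])
Y_all = np.vstack([Y, Yx])
W_ref = np.linalg.solve(A_all.T @ A_all + lam * np.eye(k), A_all.T @ Y_all)
assert np.allclose(W_new, W_ref)
```

The point of the update is cost: the recursion inverts only a p×p matrix (here 8×8) instead of refactorizing the k×k Hermitian matrix, which is where the reported speedups come from.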