In this research, a novel stochastic gradient descent based learning approach for radial basis function neural networks (RBFNN) is proposed. The proposed method is based on the $q$-gradient, also known as the Jackson derivative. In contrast to the conventional gradient, which follows the tangent of the function, the $q$-gradient follows the secant and thereby takes larger steps towards the optimal solution. The proposed $q$-RBFNN is analyzed for its convergence performance in the context of the least squares algorithm. In particular, a closed-form expression of the Wiener solution is obtained, and stability bounds on the learning rate (step-size) are derived. The analytical results are validated through computer simulations. Additionally, we propose an adaptive technique for a time-varying $q$-parameter that improves the convergence speed with no trade-off in steady-state performance.
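To make the secant interpretation concrete, the following is a minimal numerical sketch (not the paper's implementation). It uses the standard Jackson derivative $D_q f(x) = \frac{f(qx) - f(x)}{(q-1)x}$, which reduces to the ordinary derivative as $q \to 1$, and drives a scalar gradient-descent loop. The geometric decay of $q$ toward 1 is an illustrative stand-in for the adaptive $q$-parameter rule, whose exact form is given in the paper; the function, learning rate, and decay schedule here are arbitrary choices for the demo.

```python
def q_derivative(f, x, q):
    """Jackson (q-)derivative: slope of the secant through x and q*x.

    D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x).
    As q -> 1 it reduces to the ordinary derivative (tangent slope).
    """
    if x == 0.0 or q == 1.0:
        h = 1e-8
        return (f(x + h) - f(x)) / h  # ordinary finite difference at the limit
    return (f(q * x) - f(x)) / ((q - 1.0) * x)


# q-gradient descent on a 1-D quadratic; mu is the learning rate (step-size).
# q starts large (long secant steps, fast initial convergence) and decays
# toward 1, so the update approaches plain gradient descent at steady state.
f = lambda w: (w - 3.0) ** 2
w, mu = 10.0, 0.1
for t in range(100):
    q = 1.0 + 0.5 * 0.9 ** t  # illustrative decay, not the paper's rule
    w -= mu * q_derivative(f, w, q)
print(w)  # approaches the minimizer w* = 3 with no steady-state bias
```

Note that with a fixed $q \neq 1$ the iteration above settles at a biased fixed point (for this quadratic, where $(q+1)w = 6$ rather than $w = 3$), which is precisely why letting $q \to 1$ over time preserves steady-state performance while keeping the fast early steps.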