We investigate the convergence and stability properties of the decoupled extended Kalman filter (DEKF) learning algorithm within the long short-term memory (LSTM) based online learning framework. To this end, we model DEKF as a perturbed extended Kalman filter and derive sufficient conditions for its stability during LSTM training. We show that if the perturbations introduced by the decoupling stay bounded, DEKF learns the LSTM parameters with convergence and stability properties similar to those of the global extended Kalman filter learning algorithm. We verify our results with several numerical simulations and compare DEKF with other LSTM training methods. In our simulations, we also observe that the well-known hyperparameter selection approaches used for DEKF in the literature satisfy our conditions.
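The decoupling idea referenced above can be illustrated with a minimal sketch, assuming the standard block-diagonal approximation: the parameter vector is split into groups, each group keeps its own covariance block, and the scalar innovation covariance pools the contributions of all groups. All names (theta, P, Q, R, the toy linear model) are illustrative assumptions, not the paper's LSTM setup or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = w . x; the true weights are split into two groups
# to mimic the decoupling (block-diagonal covariance) used by DEKF.
w_true = np.array([1.0, -2.0, 0.5, 3.0])
groups = [slice(0, 2), slice(2, 4)]          # two decoupled parameter blocks

theta = np.zeros(4)                           # parameter estimate
P = [np.eye(2) * 10.0 for _ in groups]        # per-group covariance blocks
Q = 1e-6                                      # process-noise level (assumed)
R = 1e-2                                      # measurement-noise level (assumed)

errs = []
for t in range(200):
    x = rng.standard_normal(4)
    y = w_true @ x + 1e-2 * rng.standard_normal()

    e = y - theta @ x                         # scalar innovation
    errs.append(abs(e))

    # Jacobian of the prediction w.r.t. each group (for a linear model,
    # simply the corresponding inputs).
    H = [x[g] for g in groups]

    # Innovation covariance pools every group's contribution.
    S = R + sum(h @ Pi @ h for h, Pi in zip(H, P))

    # Decoupled updates: each group's gain uses only its own covariance block.
    for i, g in enumerate(groups):
        K = P[i] @ H[i] / S                   # per-group Kalman gain
        theta[g] += K * e
        P[i] = P[i] - np.outer(K, H[i] @ P[i]) + Q * np.eye(2)
```

On this toy problem the prediction error shrinks as the decoupled updates converge toward the true weights, which mirrors the bounded-perturbation behavior the abstract describes.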