We introduce a deep residual recurrent neural network (DR-RNN) as an efficient model reduction technique for nonlinear dynamical systems. The developed DR-RNN is inspired by the iterative steps of line search methods in finding the residual minimiser of numerically discretized differential equations. We formulate this iterative scheme as a stacked recurrent neural network (RNN) embedded with the dynamical structure of the emulated differential equations. Numerical examples demonstrate that DR-RNN can effectively emulate the full order models of nonlinear physical systems with a significantly lower number of parameters than standard RNN architectures. Further, we combine DR-RNN with Proper Orthogonal Decomposition (POD) for model reduction of time dependent partial differential equations. The presented numerical results show the stability of the proposed DR-RNN as an explicit reduced order technique. We also show significant gains in accuracy by increasing the depth of the proposed DR-RNN, consistent with other applications of deep learning.
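To illustrate the idea behind the architecture, the following is a minimal sketch (not the authors' exact formulation) of how stacked residual-minimizing updates can emulate one implicit time step of a discretized differential equation. It uses the scalar ODE du/dt = -u with an implicit-Euler residual; the step size `eta` and the number of layers `n_layers` are illustrative stand-ins for the learned weights and network depth.

```python
# Hedged sketch: one DR-RNN-style time step for du/dt = -u under
# implicit Euler. The residual is r(u) = u - u_prev + dt * u, and
# each stacked "layer" applies a line-search-like correction
# u <- u - eta * r(u), driving the residual toward zero.

def residual(u, u_prev, dt):
    """Implicit-Euler residual of du/dt = -u at the new time level."""
    return u - u_prev + dt * u

def dr_rnn_step(u_prev, dt, eta=0.5, n_layers=4):
    """Stack n_layers residual-minimizing updates (depth ~ accuracy)."""
    u = u_prev  # initial guess: carry over the previous state
    for _ in range(n_layers):
        u = u - eta * residual(u, u_prev, dt)
    return u

u_prev, dt = 1.0, 0.1
u_exact = u_prev / (1.0 + dt)   # exact implicit-Euler solution
print(abs(dr_rnn_step(u_prev, dt) - u_exact))
```

Each extra layer contracts the remaining residual by a fixed factor here, which mirrors the paper's observation that deeper DR-RNNs yield more accurate emulation of the discretized dynamics.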