In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often suffered from a lack of interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) by a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system, one combining back-propagation-through-time (BPTT) with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure.
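To make the core idea concrete, here is a minimal, hypothetical sketch of one latent-state update of a PLRNN whose unit nonlinearity is expanded in a linear spline (ReLU) basis, as the abstract describes. All names (`A`, `W`, `alpha`, `h`, `B`) and the exact parameterization are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def dend_plrnn_step(z, A, W, alpha, h):
    """One hypothetical dendritically expanded PLRNN update.

    z:     (M,)   latent state
    A:     (M,)   diagonal self-connection weights
    W:     (M, M) coupling matrix
    alpha: (B,)   weights of the B spline/ReLU basis functions ("branches")
    h:     (B, M) per-branch thresholds for each latent unit
    """
    # Replace the single ReLU of a plain PLRNN by a weighted sum of
    # B shifted ReLUs per unit -- a linear spline basis expansion:
    phi = np.einsum('b,bm->m', alpha, np.maximum(0.0, z[None, :] - h))
    # Linear self-dynamics plus nonlinearly coupled input from other units:
    return A * z + W @ phi

# Tiny usage example (B = 1 branch reduces to a plain ReLU PLRNN step):
z = np.array([1.0, -1.0])
A = np.array([0.5, 0.5])
W = np.array([[0.0, 1.0], [1.0, 0.0]])
alpha = np.array([1.0])
h = np.zeros((1, 2))
z_next = dend_plrnn_step(z, A, W, alpha, h)
```

With more branches (`B > 1`), each unit's activation becomes a piecewise-linear spline, which is what lets the network approximate more complex dynamics at a lower latent dimension while the system as a whole stays piecewise linear and hence analytically tractable.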