Neural ordinary differential equations (Neural ODEs) are an effective framework for learning dynamical systems from irregularly sampled time series data. These models provide a continuous-time latent representation of the underlying dynamical system in which new observations at arbitrary time points can be used to update the latent state. Existing parameterizations for the dynamics functions of Neural ODEs limit the ability of the model to retain global information about the time series; specifically, the piece-wise integration of the latent process between observations can result in a loss of memory of the dynamic patterns of previously observed data points. We propose PolyODE, a Neural ODE that models the latent continuous-time process as a projection onto a basis of orthogonal polynomials. This formulation enforces long-range memory and preserves a global representation of the underlying dynamical system. Our construction is backed by favourable theoretical guarantees, and in a series of experiments we demonstrate that it outperforms previous works in the reconstruction of past and future data and in downstream prediction tasks.
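To illustrate the core idea of representing a continuous-time process by its projection onto an orthogonal polynomial basis, the sketch below projects a sampled trajectory onto shifted Legendre polynomials and reconstructs it from the resulting fixed-size coefficient vector; this is the sense in which such a representation retains a global memory of the observed history. This is only an illustrative sketch, not the PolyODE implementation: the choice of Legendre basis, the trapezoidal quadrature, and all function names are assumptions made here for exposition.

```python
# Illustrative sketch (not the authors' implementation): summarising an observed
# trajectory by its projection coefficients onto an orthogonal (Legendre) basis.
# The coefficient vector is a fixed-size global summary from which the whole
# past trajectory can be approximately reconstructed.
import numpy as np
from numpy.polynomial import legendre as L


def legendre_coefficients(f, t_grid, n_basis=12):
    """Project samples f on t_grid onto the first n_basis Legendre polynomials,
    after rescaling the observation window [t0, t1] to [-1, 1]."""
    t0, t1 = t_grid[0], t_grid[-1]
    x = 2.0 * (t_grid - t0) / (t1 - t0) - 1.0  # map window to [-1, 1]
    coeffs = []
    for n in range(n_basis):
        Pn = L.Legendre.basis(n)(x)
        integrand = f * Pn
        # Trapezoidal quadrature for the integral of f * P_n over [-1, 1].
        integral = np.sum((integrand[:-1] + integrand[1:]) * np.diff(x) / 2.0)
        # Orthogonality: int_{-1}^{1} P_n^2 dx = 2 / (2n + 1).
        coeffs.append((2 * n + 1) / 2.0 * integral)
    return np.array(coeffs)


def reconstruct(coeffs, t_grid):
    """Recover the trajectory on t_grid from its basis coefficients."""
    t0, t1 = t_grid[0], t_grid[-1]
    x = 2.0 * (t_grid - t0) / (t1 - t0) - 1.0
    return L.Legendre(coeffs)(x)


# Example: a 12-dimensional coefficient vector summarises the full signal.
t = np.linspace(0.0, 10.0, 500)
signal = np.sin(t) * np.exp(-0.1 * t)
c = legendre_coefficients(signal, t, n_basis=12)
print("max reconstruction error:", np.max(np.abs(reconstruct(c, t) - signal)))
```

In PolyODE, analogous projection coefficients are carried in the latent state and updated continuously, so the model retains this global summary of past observations rather than only its most recent state.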