The combination of ordinary differential equations and neural networks, i.e., neural ordinary differential equations (Neural ODE), has been widely studied from various angles. However, deciphering the numerical integration in Neural ODE is still an open challenge, as many studies have demonstrated that the numerical integration significantly affects the performance of the model. In this paper, we propose the inverse modified differential equations (IMDE) to clarify the influence of numerical integration on training Neural ODE models. The IMDE is determined by the learning task and the employed ODE solver. It is shown that training a Neural ODE model actually returns a close approximation of the IMDE, rather than the true ODE. With the help of the IMDE, we deduce that (i) the discrepancy between the learned model and the true ODE is bounded by the sum of the discretization error and the learning loss; (ii) a Neural ODE using non-symplectic numerical integration cannot, in theory, learn conservation laws. Several experiments are performed to numerically verify our theoretical analysis.
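To make the setup concrete, the following minimal sketch illustrates how training a Neural ODE couples the learned vector field to the chosen integrator: the network is fit through a fixed-step solver, so it ends up approximating the IMDE associated with that solver rather than the true vector field. All specifics here (PyTorch, an explicit Euler solver with step size h, a toy harmonic-oscillator dataset) are assumptions for illustration and do not appear in the paper.

```python
# Minimal sketch (assumed setup): fit a neural vector field f_theta through an
# explicit (non-symplectic) Euler step so that one-step predictions match data
# generated by the exact flow of a harmonic oscillator.
import torch
import torch.nn as nn

h = 0.1  # step size of the numerical integrator (assumed)

# Neural network f_theta approximating the unknown vector field dy/dt = f(y).
f_theta = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

def euler_step(y, h):
    # Explicit Euler: y_{n+1} = y_n + h * f_theta(y_n).
    return y + h * f_theta(y)

# Toy data: pairs (y0, y1) from the exact flow of y1' = y2, y2' = -y1
# advanced over one step of length h (a rotation in the plane).
torch.manual_seed(0)
y0 = torch.randn(256, 2)
c, s = torch.cos(torch.tensor(h)), torch.sin(torch.tensor(h))
y1 = torch.stack([c * y0[:, 0] + s * y0[:, 1],
                  -s * y0[:, 0] + c * y0[:, 1]], dim=1)

opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = ((euler_step(y0, h) - y1) ** 2).mean()  # one-step prediction loss
    loss.backward()
    opt.step()
```

In this sketch the trained f_theta approximates the inverse modified equation of the true system for explicit Euler with step h, not the true field itself; the gap between them is of the order of the discretization error (here O(h)), consistent with bound (i) above, and since explicit Euler is non-symplectic the learned field need not preserve the oscillator's invariant, consistent with (ii).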