We introduce \textbf{inverse modified differential equations} (IMDEs) as a contribution to the fundamental theory of data-driven discovery of dynamics. In particular, we investigate the IMDEs of neural ordinary differential equations (neural ODEs). We show that training such a learning model in fact returns an approximation of an IMDE rather than of the original system, which illuminates the convergence analysis of data-driven discovery: the discrepancy between the learned and the true dynamics depends on the order of the numerical integrator used during training. Furthermore, IMDEs clarify the effect of parameterizing certain blocks in neural ODEs. We also perform several numerical experiments to substantiate our theoretical results.
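The phenomenon can be illustrated with a toy example (an illustrative sketch, not the paper's actual experiments): for the scalar linear system $\dot y = a y$ learned through a one-parameter model $f_\theta(y) = c\,y$ trained with the explicit Euler scheme of step size $h$, exact one-step matching gives $c = (e^{ah}-1)/h$, which agrees with the first-order IMDE $f + \tfrac{h}{2} f' f$ up to $O(h^2)$ rather than with the original vector field $f(y) = ay$.

```python
import math

# Toy setup: true dynamics dy/dt = a*y, explicit Euler with step size h.
a, h = 1.0, 0.1

# "Training" the one-parameter model f_theta(y) = c*y amounts to choosing c
# so that one Euler step matches the exact flow y(h) = exp(a*h) * y0:
#   y0 + h*c*y0 = exp(a*h)*y0  =>  c = (exp(a*h) - 1) / h
c = (math.exp(a * h) - 1.0) / h

# First-order IMDE prediction for explicit Euler: f + (h/2) f' f,
# which for f(y) = a*y gives the coefficient a + (h/2) * a**2.
imde1 = a + 0.5 * h * a**2

# The learned coefficient is close to the IMDE coefficient (O(h^2) apart),
# not to the original coefficient a (O(h) apart).
print(f"learned c = {c:.6f}, IMDE coeff = {imde1:.6f}, true a = {a:.6f}")
```

The learned coefficient deviates from the true one at first order in $h$, exactly as the IMDE analysis predicts; a higher-order integrator would shrink this discrepancy accordingly.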