The paper contributes to strengthening the relation between machine learning and the theory of differential equations. In this context, the inverse problem of fitting the parameters and the initial condition of a differential equation to a set of measurements is a key issue. The paper explores an abstraction that can be used to construct a family of loss functions for fitting the solution of an initial value problem to a set of discrete or continuous measurements. It is shown that an extension of the adjoint equation can be used to derive the gradient of the loss function, as a continuous analogue of backpropagation in machine learning. Numerical evidence is presented that, under reasonably controlled circumstances, the gradients obtained in this way can be used in a gradient descent to fit the solution of an initial value problem to a set of continuous noisy measurements, as well as to a set of discrete noisy measurements recorded at uncertain times.
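To make the adjoint-based gradient concrete, the following is a minimal sketch of the standard adjoint-sensitivity formulation; the loss \(\mathcal{L}\), the pointwise misfit \(\ell\), and the time horizon \([t_0, t_1]\) are illustrative assumptions, and the extension studied in the paper may differ in detail. For an initial value problem \(\dot{x}(t) = f(x(t), \theta, t)\), \(x(t_0) = x_0\), and a continuous-measurement loss
\[
\mathcal{L}(\theta, x_0) = \int_{t_0}^{t_1} \ell\big(x(t), t\big)\, dt ,
\]
the adjoint state \(\lambda(t)\) solves the backward problem
\[
\dot{\lambda}(t) = -\left(\frac{\partial f}{\partial x}\right)^{\!\top} \lambda(t) - \left(\frac{\partial \ell}{\partial x}\right)^{\!\top}, \qquad \lambda(t_1) = 0 ,
\]
and the gradients needed for gradient descent follow as
\[
\frac{\partial \mathcal{L}}{\partial x_0} = \lambda(t_0), \qquad
\frac{\partial \mathcal{L}}{\partial \theta} = \int_{t_0}^{t_1} \lambda(t)^{\!\top}\, \frac{\partial f}{\partial \theta}\, dt .
\]
A discrete-measurement loss replaces the integral of \(\ell\) with a sum over the measurement times, which introduces jump conditions in \(\lambda\) at those times.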