We study the meta-learning of numerical algorithms for scientific computing, which combines the mathematically driven, handcrafted design of general algorithm structure with a data-driven adaptation to specific classes of tasks. This represents a departure from classical approaches in numerical analysis, which typically do not feature such learning-based adaptations. As a case study, we develop a machine learning approach that automatically learns effective solvers for initial value problems in the form of ordinary differential equations (ODEs), based on the Runge-Kutta (RK) integrator architecture. By combining neural network approximations and meta-learning, we show that we can obtain high-order integrators for targeted families of differential equations without the need to derive integrator coefficients by hand. Moreover, we demonstrate that in certain cases we can obtain superior performance to classical RK methods. This can be attributed to the approach identifying and exploiting specific properties of the targeted ODE families. Overall, this work demonstrates an effective, learning-based approach to the design of algorithms for the numerical solution of differential equations, an approach that can be readily extended to other numerical tasks.
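To make the setting concrete, the sketch below shows an explicit Runge-Kutta step whose Butcher tableau entries (A, b, c) are treated as free parameters. This is a minimal illustration in our own notation, not the paper's implementation: in the meta-learning setup described above, such tableau entries would be parameterized (e.g., by a neural network) and optimized so that the resulting integrator performs well on a target family of ODEs.

```python
import numpy as np

def rk_step(f, t, y, h, A, b, c):
    """One explicit RK step for y' = f(t, y) with a parameterized tableau (A, b, c).

    In a learned integrator, A, b, c would be trainable parameters rather than
    hand-derived constants (hypothetical interface for illustration only).
    """
    s = len(b)                              # number of stages
    k = np.zeros((s,) + np.shape(y))        # stage derivatives
    for i in range(s):
        # Explicit method: stage i depends only on previously computed stages.
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, yi)
    return y + h * sum(b[i] * k[i] for i in range(s))

# Example tableau: the classical 4th-order RK method, which a learned
# integrator could recover or specialize for a particular ODE family.
A = [[0.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.0, 0.0],
     [0.0, 0.5, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0.0, 0.5, 0.5, 1.0]

# One step of y' = -y starting from y(0) = 1 with step size h = 0.1.
y_next = rk_step(lambda t, y: -y, t=0.0, y=np.array([1.0]), h=0.1, A=A, b=b, c=c)
```

In a meta-learning loop, one would differentiate a loss (e.g., integration error over sampled ODEs from the target family) with respect to the tableau parameters and update them by gradient descent; the fixed RK structure above is what the abstract refers to as the handcrafted algorithm architecture.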