Many problems in science and engineering require the efficient numerical approximation of integrals, a particularly important application being the numerical solution of initial value problems for differential equations. For complex systems, an equidistant discretization is often inadvisable, as it results in either prohibitively large errors or prohibitive computational effort. To this end, adaptive schemes have been developed that rely on error estimators based on Taylor series expansions. These estimators, however, a) rely on strong smoothness assumptions and b) may still result in erroneous steps for complex systems (and thus require step rejection mechanisms). We therefore propose a data-driven time stepping scheme based on machine learning, more specifically on reinforcement learning (RL) and meta-learning. First, one or several base learners (several in the case of non-smooth or hybrid systems) are trained using RL. Then, a meta-learner is trained which, depending on the system state, selects the base learner that appears to be optimal for the current situation. Several examples including both smooth and non-smooth problems demonstrate the superior performance of our approach over state-of-the-art numerical schemes. The code is available at https://github.com/lueckem/quadrature-ML.
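To make the two-level structure of the method concrete, the following is a minimal sketch of the integration loop: a meta-learner inspects the current state and picks a base learner, which in turn picks the step size for the next step. All names (`BaseLearner`, `MetaLearner`, the state features, the candidate step sizes, and the linear scoring used as a placeholder for the trained policies) are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def f(t, y):
    # Example ODE right-hand side: y' = -y (a smooth test problem).
    return -y

class BaseLearner:
    """Stand-in for an RL-trained policy mapping state features to a step size."""
    def __init__(self, step_sizes, weights):
        self.step_sizes = step_sizes   # discrete actions: candidate step sizes
        self.weights = weights         # trained policy parameters (here: random)

    def act(self, features):
        # A trained policy would score each candidate step from the features;
        # a fixed linear scoring serves as a placeholder here.
        scores = self.weights @ features
        return self.step_sizes[int(np.argmax(scores))]

class MetaLearner:
    """Stand-in for the meta-policy that selects a base learner per state."""
    def __init__(self, base_learners, weights):
        self.base_learners = base_learners
        self.weights = weights

    def select(self, features):
        scores = self.weights @ features
        return self.base_learners[int(np.argmax(scores))]

def integrate(meta, t0, y0, t_end):
    # Explicit Euler stepping; the learners only choose the step size h.
    t, y = t0, np.atleast_1d(y0).astype(float)
    ts, ys = [t], [y.copy()]
    while t < t_end:
        features = np.concatenate(([t], y))   # simple state features
        learner = meta.select(features)       # meta-learner picks a base learner
        h = min(learner.act(features), t_end - t)
        y = y + h * f(t, y)                   # one Euler step of size h
        t += h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

rng = np.random.default_rng(0)
steps = np.array([0.01, 0.05, 0.1])
base = [BaseLearner(steps, rng.normal(size=(3, 2))) for _ in range(2)]
meta = MetaLearner(base, rng.normal(size=(2, 2)))
ts, ys = integrate(meta, 0.0, 1.0, 2.0)
print(f"{len(ts) - 1} steps, y(t_end) ~ {ys[-1][0]:.4f}")
```

In the paper's setting, the linear scorings would be replaced by the trained RL policies, and the reward would balance accuracy against the number of function evaluations.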