A growing body of research shows how to replace classical partial differential equation (PDE) integrators with neural networks. A popular strategy is to generate input-output pairs with a PDE solver, train a neural network on them in a regression setting, and use the trained model as a cheap surrogate for the solver. The bottleneck in this scheme is the number of expensive PDE solver queries needed to generate the dataset. To alleviate the problem, we propose a computationally cheap augmentation strategy based on general covariance and simple random coordinate transformations. Our approach relies on the fact that physical laws are independent of the choice of coordinates, so a change of coordinates preserves the type of a parametric PDE and changes only the PDE's data (e.g., initial conditions, diffusion coefficient). For the neural networks and partial differential equations we tested, the proposed augmentation reduces test error by 23% on average. The worst observed result is a 17% increase in test error for the multilayer perceptron, and the best is an 80% decrease for the dilated residual network.
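The abstract describes the augmentation only at a high level. Below is a minimal, hypothetical sketch of the core idea for a 1D problem: sample a random smooth, strictly monotone coordinate map and pull both the input field and the solver's output back through it, yielding a new training pair at the cost of interpolation rather than a solver call. All names here (`random_monotone_map`, `augment_pair`, the Fourier-based construction, the toy fields) are illustrative assumptions, not the paper's implementation; in particular, for a real parametric PDE the coefficient fields acquire Jacobian factors under the coordinate change, which the plain scalar resampling below does not show.

```python
import numpy as np
from scipy.interpolate import interp1d

def random_monotone_map(x, rng, strength=0.3, n_modes=4):
    # Random smooth, strictly increasing map phi: [0, 1] -> [0, 1],
    # built by integrating a positive random derivative field
    # (a small number of random Fourier modes).
    amps = rng.normal(size=n_modes)
    pert = sum(a * np.sin(np.pi * (k + 1) * x) for k, a in enumerate(amps))
    deriv = 1.0 + strength * pert / (np.max(np.abs(pert)) + 1e-12)
    deriv = np.clip(deriv, 0.1, None)            # keep phi strictly increasing
    phi = np.cumsum(deriv)
    phi = (phi - phi[0]) / (phi[-1] - phi[0])    # normalize range to [0, 1]
    return phi

def augment_pair(x, u_in, u_out, rng):
    # Pull both fields back through the same random coordinate change:
    # u_tilde(y) = u(phi^{-1}(y)), realized by interpolating u at phi^{-1}(x).
    # Scalar fields transform as plain pullbacks; coefficient fields of the
    # PDE would additionally pick up Jacobian factors (omitted here).
    phi = random_monotone_map(x, rng)
    phi_inv = interp1d(phi, x, fill_value="extrapolate")
    xs = phi_inv(x)
    u_in_new = interp1d(x, u_in, fill_value="extrapolate")(xs)
    u_out_new = interp1d(x, u_out, fill_value="extrapolate")(xs)
    return u_in_new, u_out_new

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 128)
u_in = np.exp(-100 * (x - 0.5) ** 2)   # toy initial condition
u_out = np.exp(-50 * (x - 0.5) ** 2)   # toy solver output for that input
aug_in, aug_out = augment_pair(x, u_in, u_out, rng)
```

Under these assumptions, each stored solver pair can be reused many times with fresh random maps, which is what makes the augmentation computationally cheap relative to new solver queries.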