Parallel-in-time algorithms provide an additional layer of concurrency for the numerical integration of models based on time-dependent differential equations. Methods like Parareal, which parallelize across multiple time steps, rely on a computationally cheap coarse integrator to propagate information forward in time, while an expensive but parallelizable fine propagator provides accuracy. Typically, the coarse method is a numerical integrator that uses lower resolution, reduced order, or a simplified model. Our paper proposes to use a physics-informed neural network (PINN) instead. We demonstrate for the Black-Scholes equation, a partial differential equation from computational finance, that Parareal with a PINN coarse propagator provides better speedup than a numerical coarse propagator. Training and evaluating a neural network are both tasks whose computing patterns are well suited for GPUs. By contrast, mesh-based algorithms with their low computational intensity struggle to perform well on GPUs. We show that moving the coarse-propagator PINN to a GPU while running the numerical fine propagator on the CPU further improves Parareal's single-node performance. This suggests that integrating machine learning techniques into parallel-in-time integration methods, and exploiting their differences in computing patterns, might offer a way to better utilize heterogeneous architectures.
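The abstract describes Parareal only informally. For orientation, the standard Parareal correction iteration found in the literature is u_{k+1}^{n+1} = G(u_{k+1}^{n}) + F(u_k^{n}) - G(u_k^{n}), where G is the cheap coarse propagator (in the paper, a PINN) and F the accurate fine propagator. The following minimal NumPy sketch illustrates this iteration on a scalar test ODE, not the Black-Scholes equation; coarse_prop (one explicit Euler step, standing in for the PINN), fine_prop (the exact solution, standing in for the numerical fine solver), and the function parareal are all hypothetical names introduced here for illustration, not code from the paper.

import numpy as np

def parareal(u0, t0, t1, n_slices, n_iter, coarse, fine):
    """Generic Parareal iteration (sketch, not the paper's implementation).

    coarse(u, ta, tb): cheap propagator; in the paper this role is played
                       by a PINN evaluated on the GPU.
    fine(u, ta, tb):   accurate propagator; parallelizable across slices,
                       though executed serially in this sketch.
    """
    ts = np.linspace(t0, t1, n_slices + 1)
    # Initial guess: one serial sweep with the coarse propagator.
    u = np.empty(n_slices + 1)
    u[0] = u0
    for n in range(n_slices):
        u[n + 1] = coarse(u[n], ts[n], ts[n + 1])

    for _ in range(n_iter):
        # Fine and coarse propagation from the current iterate.
        # The fine sweep is the embarrassingly parallel step.
        f = np.array([fine(u[n], ts[n], ts[n + 1]) for n in range(n_slices)])
        g_old = np.array([coarse(u[n], ts[n], ts[n + 1]) for n in range(n_slices)])
        # Serial correction sweep:
        # u_{k+1}^{n+1} = G(u_{k+1}^n) + F(u_k^n) - G(u_k^n).
        u_new = u.copy()
        for n in range(n_slices):
            u_new[n + 1] = coarse(u_new[n], ts[n], ts[n + 1]) + f[n] - g_old[n]
        u = u_new
    return ts, u

# Stand-in problem: scalar decay u' = lam * u (not Black-Scholes).
lam = -1.0
fine_prop = lambda u, ta, tb: u * np.exp(lam * (tb - ta))    # "exact" fine solve
coarse_prop = lambda u, ta, tb: u * (1.0 + lam * (tb - ta))  # one explicit Euler step

ts, u = parareal(1.0, 0.0, 2.0, n_slices=10, n_iter=3,
                 coarse=coarse_prop, fine=fine_prop)
print(np.max(np.abs(u - np.exp(lam * ts))))  # Parareal error vs. exact solution

In the paper's heterogeneous setup, coarse would be a batched PINN forward pass on the GPU while fine runs concurrently on CPU cores, which is what exploits the difference in computing patterns mentioned above.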