We propose a framework for training neural networks that are coupled with partial differential equations (PDEs) in a parallel computing environment. Unlike most distributed computing frameworks for deep neural networks, our focus is to parallelize both the numerical solvers and the deep neural networks in forward and adjoint computations. Our parallel computing model treats data communication as a node in the computational graph for numerical simulations. The advantage of this model is that data communication and computation are cleanly separated, which improves flexibility, modularity, and testability. We demonstrate on a variety of large-scale problems that substantial acceleration can be achieved by using parallel PDE solvers to train deep neural networks that are coupled with PDEs.
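The central idea, treating data communication as a node in the computational graph with its own forward and adjoint rules, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a CPU setting with mpi4py and PyTorch, and the names `HaloExchange`, `src`, and `dst` are hypothetical. The forward pass performs an MPI exchange of values; the adjoint (backward) pass performs the same exchange with the communication direction reversed, so gradients flow along the reversed edges of the graph.

```python
# A minimal sketch (assumed setup, not the paper's API): data communication
# as a differentiable node in the computational graph.
import torch
from mpi4py import MPI

comm = MPI.COMM_WORLD


class HaloExchange(torch.autograd.Function):
    """Communication as a graph node: forward exchanges values with
    neighboring ranks; backward exchanges adjoints in the opposite
    direction."""

    @staticmethod
    def forward(ctx, x, src, dst):
        ctx.src, ctx.dst = src, dst
        buf = torch.empty_like(x)
        # Forward communication: send local values to dst, receive from src.
        comm.Sendrecv(x.detach().contiguous().numpy(), dest=dst,
                      recvbuf=buf.numpy(), source=src)
        return buf

    @staticmethod
    def backward(ctx, grad_out):
        buf = torch.empty_like(grad_out)
        # Adjoint communication: the roles of src and dst are swapped, so
        # the gradient travels back along the reversed edge.
        comm.Sendrecv(grad_out.detach().contiguous().numpy(), dest=ctx.src,
                      recvbuf=buf.numpy(), source=ctx.dst)
        return buf, None, None


# Usage on a ring of ranks (run with, e.g., `mpirun -n 2 python sketch.py`):
# the surrounding autodiff triggers the adjoint communication automatically.
rank, size = comm.Get_rank(), comm.Get_size()
x = torch.randn(4, requires_grad=True)
y = HaloExchange.apply(x, (rank - 1) % size, (rank + 1) % size)
y.sum().backward()  # backward() runs the reversed exchange
```

Isolating the exchange in its own graph node is one way to realize the clean separation of communication and computation described above: the PDE solver and network kernels can be tested rank-locally, while the communication node is tested independently.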