We propose a method combining boundary integral equations and neural networks (BINet) to solve partial differential equations (PDEs) in both bounded and unbounded domains. Unlike existing methods that operate directly on the original PDEs, BINet learns, as a proxy, to solve the associated boundary integral equations using neural networks. The benefits are three-fold. Firstly, only the boundary conditions need to be fitted, since the PDE is automatically satisfied by single- or double-layer representations according to potential theory. Secondly, the dimension of the boundary integral equations is typically smaller, so the sample complexity can be reduced significantly. Lastly, all differential operators of the original PDEs are removed in the proposed method, which improves numerical efficiency and stability. Adopting neural tangent kernel (NTK) techniques, we prove the convergence of BINet in the limit where the width of the neural network goes to infinity. Extensive numerical experiments show that, without calculating high-order derivatives, BINet is much easier to train and usually gives more accurate solutions, especially in cases where the boundary conditions are not smooth. Furthermore, BINet outperforms strong baselines for both a single PDE and parameterized PDEs, in bounded and unbounded domains.
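The first benefit above rests on a classical fact from potential theory: a single- or double-layer potential is harmonic for any boundary density, so a network only needs to fit the density to match the boundary data. The sketch below illustrates this for the 2D Laplace equation on the unit circle; the density `phi` is a hypothetical fixed function standing in for a trained network, and the quadrature is a plain trapezoid rule (illustrative only, not the paper's implementation).

```python
import numpy as np

# Double-layer potential for the 2D Laplace equation on the unit circle.
# With G(x, y) = -(1/2pi) * ln|x - y|, the representation
#   u(x) = int_Gamma phi(y) dG/dn_y(x, y) ds(y)
# is harmonic inside the domain for ANY density phi, so only the boundary
# condition remains to be fitted (by a network, in BINet's setting).

def double_layer(x, theta, phi_vals):
    """Evaluate u(x) by the trapezoid rule over boundary angles theta."""
    y = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # boundary points
    n = y                                   # outward unit normal on the circle
    d = x - y                               # x - y, shape (m, 2)
    r2 = np.sum(d * d, axis=1)
    kernel = np.sum(d * n, axis=1) / (2 * np.pi * r2)  # dG/dn_y
    ds = 2 * np.pi / len(theta)             # arclength quadrature weight
    return np.sum(kernel * phi_vals) * ds

theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
phi = np.cos(3 * theta)  # hypothetical density (stand-in for a learned one)

# Check harmonicity at an interior point with a 5-point Laplacian stencil.
x0, h = np.array([0.3, -0.2]), 1e-3
lap = (double_layer(x0 + [h, 0], theta, phi) + double_layer(x0 - [h, 0], theta, phi)
       + double_layer(x0 + [0, h], theta, phi) + double_layer(x0 - [0, h], theta, phi)
       - 4 * double_layer(x0, theta, phi)) / h**2
print(abs(lap))  # near zero: the PDE holds regardless of phi
```

Because the representation satisfies the PDE by construction, training reduces to a fit on the (lower-dimensional) boundary, which is the source of the sample-complexity and stability gains claimed above.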