Neural network-based methods for solving differential equations have been gaining traction. They work by iteratively minimizing the differential-equation residuals of a neural network at a sample of points. However, most of them employ standard sampling schemes, such as uniform random sampling or perturbing equally spaced points. We present a novel sampling scheme that samples points adversarially, so as to maximize the loss of the current solution estimate. We describe a sampler architecture along with the loss terms used to train it. Finally, we demonstrate that this scheme outperforms pre-existing schemes by comparing them on a number of problems.
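The core idea of adversarial sampling can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's method: instead of a learned sampler network, it uses plain gradient ascent on the squared residual of a hypothetical closed-form solution estimate, so that sample points drift toward the regions where the current estimate is worst.

```python
import numpy as np

# Toy setting (hypothetical, for illustration only): the ODE is u'(x) = cos(x)
# on [0, 2*pi], and the current solution estimate is u(x) = x, so u'(x) = 1.
# The squared residual r(x) = (u'(x) - cos(x))^2 peaks where the estimate is
# worst; an adversarial sampler moves sample points uphill on r so that
# training concentrates on the highest-loss regions.

def residual_sq(x):
    # squared residual of the estimate u(x) = x for u'(x) = cos(x)
    return (1.0 - np.cos(x)) ** 2

def grad_residual_sq(x, eps=1e-6):
    # central finite-difference gradient of the squared residual w.r.t. x
    # (a real implementation would use automatic differentiation)
    return (residual_sq(x + eps) - residual_sq(x - eps)) / (2 * eps)

def adversarial_sample(x0, steps=100, lr=0.05, lo=0.0, hi=2 * np.pi):
    # gradient *ascent*: maximize the residual of the current estimate,
    # clipping to keep points inside the problem domain
    x = x0.copy()
    for _ in range(steps):
        x = np.clip(x + lr * grad_residual_sq(x), lo, hi)
    return x

rng = np.random.default_rng(0)
x_init = rng.uniform(0.0, 2 * np.pi, size=8)  # start from uniform samples
x_adv = adversarial_sample(x_init)
# the adversarial points carry a higher residual loss than the uniform ones
```

In the paper's setting the hand-written gradient ascent would be replaced by the trained sampler network, but the objective is the same: propose points that maximize the loss of the current solution estimate.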