We analyse the difference in convergence behaviour between exact and penalised boundary values for the residual minimisation of PDEs with neural network type ansatz functions, as is commonly done in the context of physics-informed neural networks. It is known that using an $L^2$ boundary penalty leads to a loss of regularity of $3/2$, meaning that approximation in $H^2$ only yields a priori estimates in $H^{1/2}$. These notes demonstrate how this loss of regularity can be circumvented if the functions in the ansatz class satisfy the boundary values exactly. Furthermore, it is shown that in this case the loss function provides a consistent a posteriori estimator, in the $H^2$ norm, of the error made by the residual minimisation method. We provide analogous results for linear time-dependent problems and discuss the implications of measuring the residual in Sobolev norms.
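For concreteness, consider the Poisson model problem $-\Delta u = f$ in $\Omega$ with Dirichlet data $u = g$ on $\partial\Omega$; this is a representative sketch, and the penalty weight $\lambda > 0$ and the parametrised ansatz $u_\theta$ below are illustrative choices rather than notation fixed in the main text. The two formulations compared here then correspond to the loss functionals
\[
\mathcal{L}_{\mathrm{pen}}(\theta) = \|\Delta u_\theta + f\|_{L^2(\Omega)}^2 + \lambda\, \|u_\theta - g\|_{L^2(\partial\Omega)}^2,
\qquad
\mathcal{L}_{\mathrm{exact}}(\theta) = \|\Delta u_\theta + f\|_{L^2(\Omega)}^2,
\]
where in the exact case one may take, for instance, $u_\theta = \tilde g + d\, N_\theta$ with an extension $\tilde g$ of the boundary data, a smooth function $d$ vanishing exactly on $\partial\Omega$, and a neural network $N_\theta$, so that the boundary condition holds by construction and only the interior residual remains in the loss.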