Physics-Informed Neural Networks (PINNs) have shown unique utility in parameterising the solution of a well-defined partial differential equation using automatic differentiation and residual losses. Although they come with theoretical guarantees of convergence, in practice the required training regimes tend to be exacting and demanding. In this paper, we study the loss landscapes associated with PINNs and show how they offer insight into why PINNs are fundamentally hard to optimise. We demonstrate that PINNs can be forced to converge better towards the solution by feeding in sparse or coarse data as a regulator. The data regulates and morphs the topology of the PINN's loss landscape, making it easier for the minimiser to traverse. Data regulation of PINNs thus eases the optimisation required for convergence by invoking a hybrid unsupervised-supervised training approach, in which the labelled data pushes the network towards the vicinity of the solution and the unlabelled regime fine-tunes it to the solution.
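To make the hybrid loss concrete, the sketch below shows a minimal PINN in JAX for the toy ODE u'(x) = -u(x) with u(0) = 1 (solution exp(-x)). It is an illustrative assumption, not the paper's implementation: the network, problem, and optimiser are all hypothetical stand-ins. The total loss combines an unsupervised PDE-residual term over unlabelled collocation points with a supervised term over a few sparse labelled points, the "data regulator" described above.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Tiny tanh MLP mapping a scalar x to a scalar u(x)."""
    h = jnp.array([x])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def residual(params, x):
    # PDE residual r(x) = u'(x) + u(x) for the toy ODE u' = -u,
    # with u'(x) obtained by automatic differentiation.
    du = jax.grad(mlp, argnums=1)(params, x)
    return du + mlp(params, x)

def loss(params, xs_col, xs_dat, ys_dat):
    # Unsupervised term: mean squared residual at unlabelled collocation points.
    pde = jnp.mean(jax.vmap(lambda x: residual(params, x) ** 2)(xs_col))
    # Supervised term: mean squared error at sparse labelled points.
    dat = jnp.mean(jax.vmap(lambda x, y: (mlp(params, x) - y) ** 2)(xs_dat, ys_dat))
    return pde + dat

# Initialise a small [1, 16, 16, 1] network.
key = jax.random.PRNGKey(0)
sizes = [1, 16, 16, 1]
params = []
for i in range(len(sizes) - 1):
    key, k1 = jax.random.split(key)
    params.append((0.5 * jax.random.normal(k1, (sizes[i + 1], sizes[i])),
                   jnp.zeros(sizes[i + 1])))

xs_col = jnp.linspace(0.0, 2.0, 32)   # unlabelled collocation points
xs_dat = jnp.array([0.0, 1.0, 2.0])   # sparse labelled data
ys_dat = jnp.exp(-xs_dat)             # exact solution sampled coarsely

grad_fn = jax.jit(jax.grad(loss))
init_loss = float(loss(params, xs_col, xs_dat, ys_dat))
lr = 1e-2
for _ in range(2000):                 # plain gradient descent for brevity
    g = grad_fn(params, xs_col, xs_dat, ys_dat)
    params = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g)
final_loss = float(loss(params, xs_col, xs_dat, ys_dat))
print(init_loss, final_loss)
```

The key design point is that both terms share the same network parameters: the labelled points anchor the minimiser near the true solution, while the residual term refines the fit everywhere else.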