Physics-Informed Neural Networks (PINNs) are neural network architectures trained to emulate the solutions of differential equations without requiring solution data. They are now ubiquitous in the scientific literature thanks to their flexibility and their promise across a wide range of settings. However, little of the available research offers practical studies aimed at a better quantitative understanding of these architectures and their behavior. In this paper, we analyze the performance of PINNs across a range of architectural hyperparameters and algorithmic settings, based on a novel error metric and on other factors such as training time. The proposed metric and approach are tailored to evaluating how well a PINN generalizes to points outside its training domain. In addition, we investigate how the algorithmic setup affects a PINN's predictions, both inside and outside its training domain, in order to isolate the effect of each hyperparameter. Through this study, we assess how the algorithmic setup of a PINN influences its potential for generalization, and we identify the settings that maximize its potential for accurate generalization. The study yields insightful and at times counterintuitive results on PINNs, which can be useful in PINN applications when defining and evaluating a model.
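The abstract does not specify the error metric or the experimental setup. As a rough illustration of the kind of evaluation it describes, the following hypothetical PyTorch sketch trains a PINN on a toy ODE without any solution data, then compares relative L2 error inside and outside the training interval. All architecture, problem, and training choices here are illustrative assumptions, not the paper's settings.

```python
# Hypothetical PINN sketch (not the paper's setup): train a small MLP to
# satisfy u'(x) = cos(x), u(0) = 0 on [0, pi] using only the ODE residual
# (no solution data), then measure error on points OUTSIDE the training
# interval, mirroring the out-of-domain generalization evaluation above.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(          # architecture hyperparameters are
    torch.nn.Linear(1, 32),         # illustrative choices only
    torch.nn.Tanh(),
    torch.nn.Linear(32, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1) * torch.pi          # collocation points in [0, pi]
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du - torch.cos(x)               # ODE residual, no solution data
    bc = net(torch.zeros(1, 1))                # boundary condition u(0) = 0
    loss = (residual ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

def rel_l2(pred, exact):
    # Relative L2 error, one plausible stand-in for the paper's metric.
    return (torch.linalg.norm(pred - exact) / torch.linalg.norm(exact)).item()

with torch.no_grad():
    x_in = torch.linspace(0, torch.pi, 200).unsqueeze(1)          # training domain
    x_out = torch.linspace(torch.pi, 2 * torch.pi, 200).unsqueeze(1)  # beyond it
    print("in-domain error:     ", rel_l2(net(x_in), torch.sin(x_in)))
    print("out-of-domain error: ", rel_l2(net(x_out), torch.sin(x_out)))
```

Under a setup like this, varying the hyperparameters (depth, width, activation, optimizer, number of collocation points) and re-running the out-of-domain evaluation is one way to reproduce the style of study the abstract outlines.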