Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, which stems from the vanilla PINN, as well as many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and their demonstrated ability to be more feasible in some contexts than classical numerical techniques such as the Finite Element Method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.
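As a rough illustration of the multi-task objective mentioned above, a minimal sketch of the composite PINN loss is given below; the notation (the weight $\lambda$, the operator $\mathcal{F}$, and the point sets) is chosen here for illustration and is not taken from any specific reviewed paper:
\[
\mathcal{L}(\theta) \;=\; \underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}\bigl|u_\theta(x_i^{d}) - u_i^{d}\bigr|^2}_{\text{data fit}}
\;+\;
\lambda\,\underbrace{\frac{1}{N_c}\sum_{j=1}^{N_c}\bigl|\mathcal{F}[u_\theta](x_j^{c})\bigr|^2}_{\text{PDE residual}},
\]
where $u_\theta$ is the NN approximation of the solution, $\mathcal{F}[u]=0$ denotes the governing PDE, $x_i^{d}$ are observed data or boundary points, $x_j^{c}$ are collocation points in the domain, and $\lambda$ balances the two terms.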