We propose characteristics-informed neural networks (CINN), a simple and efficient machine learning approach for solving forward and inverse problems involving hyperbolic PDEs. Like physics-informed neural networks (PINN), CINN is a meshless machine learning solver with universal approximation capabilities. Unlike PINN, which enforces a PDE softly via a multi-part loss function, CINN encodes the characteristics of the PDE in a general-purpose deep neural network by adding a characteristic layer. This neural network is trained with the usual MSE data-fitting regression loss and does not require residual losses on collocation points. This leads to faster training and avoids well-known pathologies of gradient descent optimization of multi-part PINN loss functions. This paper focuses on linear transport phenomena, in which case it is shown that, if the characteristic ODEs can be solved exactly, then the output of a CINN is an exact solution of the PDE, even at initialization, preventing the occurrence of non-physical solutions. In addition, a CINN can also be trained with soft penalty constraints that enforce, for example, periodic or Neumann boundary conditions, without losing the property that the output satisfies the PDE automatically. We also propose an architecture that extends the CINN approach to linear hyperbolic systems of PDEs. All CINN architectures proposed here can be trained end-to-end from sample data using standard deep learning software. Experiments with the simple advection equation, a stiff periodic advection equation, and an acoustics problem where data from one field are used to predict the other, unseen field, indicate that CINN is able to improve on the accuracy of the baseline PINN, in some cases by a considerable margin, while also being significantly faster to train and avoiding non-physical solutions. An extension to nonlinear PDEs is also briefly discussed.
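To make the characteristic-layer idea concrete, the sketch below (not the authors' implementation) shows how it could look for the constant-speed advection equation u_t + c u_x = 0: the characteristic layer maps (x, t) to the characteristic variable xi = x - c t, and any network applied to xi satisfies the PDE exactly, so training reduces to plain MSE regression on initial/boundary data with no residual loss on collocation points. The network sizes, the speed c, and the helper names `cinn`, `mlp`, and `mse_loss` are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

c = 1.0  # assumed advection speed (illustrative)

def init_mlp(key, sizes=(1, 32, 32, 1)):
    # Small MLP acting on the scalar characteristic variable.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def mlp(params, xi):
    h = xi
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def cinn(params, x, t):
    # Characteristic layer: collapse (x, t) onto the characteristic variable.
    # Any function of xi solves u_t + c u_x = 0, even at initialization.
    xi = (x - c * t).reshape(-1, 1)
    return mlp(params, xi).ravel()

def mse_loss(params, x, t, u_data):
    # Plain data-fitting loss; no PDE-residual term is needed.
    return jnp.mean((cinn(params, x, t) - u_data) ** 2)

# Usage: fit the initial condition u(x, 0) = sin(x); the trained network then
# predicts u(x, t) = sin(x - c t) by construction along characteristics.
key = jax.random.PRNGKey(0)
params = init_mlp(key)
x0 = jnp.linspace(0.0, 2 * jnp.pi, 128)
t0 = jnp.zeros_like(x0)
u0 = jnp.sin(x0)
grads = jax.grad(mse_loss)(params, x0, t0, u0)  # gradients for a standard optimizer step
```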