Physics-informed neural networks (PINNs) are increasingly employed for their ability to model complex physical systems. Achieving sufficient expressiveness often requires ever-larger networks, which makes training PINNs challenging on edge devices with limited memory, computing, and energy resources. To enable training PINNs on edge devices, this paper proposes an end-to-end compressed PINN based on Tensor-Train decomposition. In solving a Helmholtz equation, our proposed model significantly outperforms original PINNs with a comparably small number of parameters and achieves satisfactory predictions with up to a 15$\times$ overall parameter reduction.
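As a brief illustration of the compression mechanism, the following is a minimal sketch of the standard Tensor-Train (TT) matrix format for a fully connected layer; the specific reshaping, number of cores $d$, and TT ranks $r_k$ are not given in this abstract and are illustrative assumptions. A weight matrix $W \in \mathbb{R}^{M \times N}$ with $M = \prod_{k=1}^{d} m_k$ and $N = \prod_{k=1}^{d} n_k$ is stored as a chain of small cores:
\[
W\bigl((i_1,j_1),\dots,(i_d,j_d)\bigr)
  = \mathcal{G}_1[(i_1,j_1)]\,\mathcal{G}_2[(i_2,j_2)]\cdots\mathcal{G}_d[(i_d,j_d)],
\qquad \mathcal{G}_k[(i_k,j_k)] \in \mathbb{R}^{r_{k-1}\times r_k},\ r_0 = r_d = 1,
\]
so the storage cost drops from $\prod_{k} m_k n_k$ entries to $\sum_{k} r_{k-1} m_k n_k r_k$, which is small whenever the TT ranks $r_k$ are small and is the source of the overall parameter reduction reported above.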