This work introduces Knowledge-Distilled Physics-Informed Neural Networks (KD-PINN), a framework that transfers the predictive accuracy of a high-capacity teacher model to a compact student via a continuous adaptation of the Kullback-Leibler divergence. To assess its generality across dynamics and dimensionalities, the framework is evaluated on a representative set of partial differential equations (PDEs). In all tested cases, the student preserved the teacher's physical accuracy, with a mean RMSE increase below 0.64%, while achieving inference speedups from 4.8x (Navier-Stokes) to 6.9x (Burgers). The distillation process also exhibited a regularizing effect on the student. With an average CPU inference latency of 5.3 ms, the distilled models reach the ultra-low-latency real-time regime of sub-10 ms performance. Finally, the study analyzes how knowledge distillation reduces inference latency in PINNs, contributing to the development of accurate, ultra-low-latency neural PDE solvers.
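The abstract does not specify the exact form of the distillation objective, but a combined loss of this kind is often written as a weighted sum of a PDE-residual term and a teacher-student divergence term. The sketch below illustrates one plausible instantiation, assuming the continuous KL term is taken between Gaussian predictive distributions centered on the teacher and student outputs; the function names (`gaussian_kl`, `kd_pinn_loss`), the fixed standard deviations, and the weighting scheme `alpha` are all hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kl(mu_t, sigma_t, mu_s, sigma_s):
    # Elementwise KL(N(mu_t, sigma_t^2) || N(mu_s, sigma_s^2)) in closed form.
    return np.log(sigma_s / sigma_t) + (sigma_t**2 + (mu_t - mu_s)**2) / (2 * sigma_s**2) - 0.5

def kd_pinn_loss(pde_residual, u_teacher, u_student, sigma_t=0.1, sigma_s=0.1, alpha=0.5):
    # Hypothetical combined objective: physics-residual term plus a continuous
    # KL distillation term between teacher and student predictions.
    physics = np.mean(pde_residual**2)                                  # mean squared PDE residual
    distill = np.mean(gaussian_kl(u_teacher, sigma_t, u_student, sigma_s))
    return (1 - alpha) * physics + alpha * distill

# When the student matches the teacher exactly and the residual vanishes,
# both terms are zero and so is the loss.
res = np.zeros(4)
u = np.linspace(0.0, 1.0, 4)
print(kd_pinn_loss(res, u, u))  # → 0.0
```

In practice the residual would come from automatic differentiation of the student network at collocation points, and `alpha` could be annealed over training; the sketch only shows how the two terms combine into a single scalar loss.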