The deep operator network (DeepONet) has demonstrated great success in various learning tasks, including learning solution operators of partial differential equations. In particular, it provides an efficient approach to predicting the solutions of evolution equations over a finite time horizon. Nevertheless, the vanilla DeepONet suffers from stability degradation in long-time prediction. This paper proposes a {\em transfer-learning} aided DeepONet to enhance stability. Our idea is to use transfer learning to sequentially update the DeepONets that serve as surrogates for the propagators learned over different time frames. The evolving DeepONets can better track the varying complexity of the evolution equations, while only a tiny fraction of each operator network needs to be retrained. Through systematic experiments, we show that the proposed method not only improves the long-time accuracy of DeepONet at a similar computational cost, but also substantially reduces the required size of the training set.
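The sequential update strategy described above can be illustrated with a minimal sketch. Here a toy linear surrogate stands in for the branch/trunk networks of a DeepONet, and the one-step propagator of a periodic 1-D heat equation stands in for the unknown solution operator; the surrogate is trained from scratch on the first time frame, then warm-started and fine-tuned with only a few iterations and few samples on later frames. All names (`make_data`, `fit_frame`) and numerical settings are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32          # number of spatial sensors
c = 0.25        # diffusion number nu*dt/dx^2 (stable explicit Euler)

# One-step propagator of the periodic 1-D heat equation, standing in
# for the unknown solution operator that the surrogate must learn.
A = (1 - 2 * c) * np.eye(n)
for i in range(n):
    A[i, (i - 1) % n] += c
    A[i, (i + 1) % n] += c

def make_data(n_samples, steps):
    """Sample random initial states and advance them `steps` time steps,
    returning pairs (state, state one step later) for that time frame."""
    U0 = rng.standard_normal((n_samples, n))
    Uk = U0 @ np.linalg.matrix_power(A, steps).T
    return Uk, Uk @ A.T

def fit_frame(X, Y, W_init=None, lr=0.5, iters=200):
    """Fit a linear surrogate W so that X @ W.T ~ Y by gradient descent.
    With W_init given, this is the transfer step: warm-start from the
    previous frame's surrogate and take only a few cheap updates."""
    W = np.zeros((n, n)) if W_init is None else W_init.copy()
    for _ in range(iters):
        grad = (X @ W.T - Y).T @ X / len(X)
        W -= lr * grad
    return W

# Frame 0: full training from scratch on early-time data.
X0, Y0 = make_data(500, steps=0)
W = fit_frame(X0, Y0, iters=2000)

# Later frames: cheap fine-tuning with few iterations and small data,
# tracking the (smoother) solutions reached at later times.
for frame in range(1, 4):
    Xf, Yf = make_data(50, steps=frame * 100)
    W = fit_frame(Xf, Yf, W_init=W, iters=50)

err = np.linalg.norm(Xf @ W.T - Yf) / np.linalg.norm(Yf)
print(f"relative one-step error in last frame: {err:.2e}")
```

The design point mirrors the abstract: each frame's surrogate inherits most of its parameters from the previous one, so only a short, inexpensive fine-tuning pass on a small dataset is needed per frame, instead of retraining from scratch.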